Why does a sufficient statistic contain all the information needed to compute any estimate of the parameter?

Let me give another perspective that may help. This is also qualitative, but there is a rigorous version of it, particularly important in information theory, known as the Markov property.
In the beginning we have two objects: the data (coming from a random variable, call it $X$) and the parameter $\theta$ (another random variable, implicitly assumed since we are talking about its estimator). These two are assumed to be dependent (otherwise there is no point in trying to estimate one from the other). Now a third object enters the game, the sufficient statistic $T$. The intuitive idea when we say $T$ is enough to estimate $\theta$ is that once we know $T$ (i.e., conditioned on $T$), $X$ provides no additional information: $X$ and $\theta$ are conditionally independent given $T$. In other words, knowledge of $X$ is equivalent to knowledge of $T$ as far as estimation of $\theta$ is concerned. Note that the probabilities are where all the uncertainty is captured, hence "any estimate" is covered once the (conditional) probabilities are independent (e.g., the conditional densities factorize).
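This conditional-independence view can be checked numerically. Below is a minimal sketch (my own illustration, not part of the answer) for the Poisson case, where $T = \sum_i X_i$ is sufficient for $\lambda$: conditioned on $T = t$, the first observation is $\mathrm{Binomial}(t, 1/n)$ no matter what $\lambda$ is, so its conditional mean should be $t/n$ for every $\lambda$.

```python
import numpy as np

# Sketch: for X_1..X_n iid Poisson(lam), T = sum(X) is sufficient for lam.
# Conditioned on T = t, X_1 ~ Binomial(t, 1/n) regardless of lam, so the
# empirical mean of X_1 given T = t should be near t/n for every lam.
rng = np.random.default_rng(0)
n, t = 3, 6
cond_means = []
for lam in (2.0, 5.0):
    X = rng.poisson(lam, size=(200_000, n))
    hits = X[X.sum(axis=1) == t]          # keep only samples with T exactly t
    cond_means.append(hits[:, 0].mean())  # empirical E[X_1 | T = t]

print(cond_means)  # both entries should be close to t/n = 2.0
```

The two conditional means agree even though the two marginal distributions of $X_1$ (for $\lambda = 2$ and $\lambda = 5$) are very different.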
Why does a sufficient statistic contain all the information needed to compute any estimate of the parameter?

The second sentence in the quote is proven by the factorization theorem, which shows that a sample conditioned on its sufficient statistic is independent of the parameter. The first sentence is equivalent to the second (to the extent that an informal statement can be equivalent to a mathematical one) if we modify it to say "all the parameter information." In other words, the statement assumes that only parameter (Fisher) information is inferentially useful. Or, at least, it assumes that any other type of information, however conceived, defined, and measured, must be redundant with the Fisher information for purposes of improving the accuracy and/or precision of an estimate.
Do optimization techniques map to sampling techniques?

One connection has been brought up by Max Welling and friends in these two papers:
Bayesian Learning via Stochastic Gradient Langevin Dynamics
Bayesian Posterior Sampling via Stochastic Gradient Fisher Scoring
The gist is that "learning", i.e., optimisation of a model, smoothly transitions into sampling from the posterior.
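A minimal sketch of the SGLD idea from the first paper (the toy model, step size, and minibatch size below are my own choices, not from the paper): take a gradient-ascent step on the log-posterior, with the likelihood gradient computed on a minibatch and rescaled by $N/n$, and add Gaussian noise with variance equal to the step size. Without the noise this is stochastic gradient optimisation toward the mode; with it, the iterates approximately sample the posterior.

```python
import numpy as np

# SGLD sketch for an assumed toy model: x_i ~ N(theta, 1), prior theta ~ N(0, 10).
rng = np.random.default_rng(1)
N, batch = 500, 50
x = rng.normal(1.0, 1.0, size=N)
eps = 1e-3                       # fixed small step size
theta, trace = 0.0, []
for step in range(6000):
    idx = rng.choice(N, size=batch, replace=False)
    grad_prior = -theta / 10.0                       # d/dtheta log N(theta; 0, 10)
    grad_lik = (N / batch) * np.sum(x[idx] - theta)  # rescaled minibatch gradient
    theta += 0.5 * eps * (grad_prior + grad_lik) + rng.normal(0.0, np.sqrt(eps))
    if step >= 1000:             # discard burn-in
        trace.append(theta)

post_mean = x.sum() / (N + 0.1)  # exact Gaussian posterior mean, for comparison
print(np.mean(trace), post_mean)
```

With a decreasing step size the chain targets the exact posterior; with a small fixed step, as here, it is a close approximation.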
Do optimization techniques map to sampling techniques?

There is a link: the Gumbel-Max trick!
http://www.cs.toronto.edu/~cmaddis/pubs/astar.pdf
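The trick itself fits in a few lines: sampling from a categorical distribution is recast as an optimization, the argmax of log-probabilities perturbed by Gumbel noise. A quick sketch (the probabilities and sample size are arbitrary):

```python
import numpy as np

# Gumbel-Max trick: argmax_i(log p_i + G_i) with G_i ~ Gumbel(0, 1)
# is distributed exactly as Categorical(p).
rng = np.random.default_rng(0)
p = np.array([0.2, 0.3, 0.5])
n = 100_000
gumbel = -np.log(-np.log(rng.uniform(size=(n, p.size))))  # Gumbel(0,1) draws
draws = np.argmax(np.log(p) + gumbel, axis=1)
freqs = np.bincount(draws, minlength=p.size) / n
print(freqs)  # empirical frequencies close to [0.2, 0.3, 0.5]
```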
Do optimization techniques map to sampling techniques?

One possibility is to find the CDF of the heuristic. Then from Monte Carlo theory we know that for $U \sim \mathrm{Unif}[0,1]$, $F^{-1}(U) \sim F$, where $F$ is the CDF of the distribution you are after. If you cannot find the CDF exactly, you could use a simple acceptance-rejection heuristic.
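As a concrete sketch of the inverse-CDF step (the exponential distribution here is my example, not the asker's heuristic): for $F(x) = 1 - e^{-\lambda x}$ we have $F^{-1}(u) = -\ln(1-u)/\lambda$.

```python
import numpy as np

# Inverse-CDF sampling: if U ~ Unif[0,1] then F^{-1}(U) ~ F.
# Example: Exponential(rate=2), F(x) = 1 - exp(-2x), so F^{-1}(u) = -ln(1-u)/2.
rng = np.random.default_rng(0)
u = rng.uniform(size=200_000)
samples = -np.log(1.0 - u) / 2.0
print(samples.mean())  # should be near 1/rate = 0.5
```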
Repeated measures ANOVA: what is the normality assumption?

This is the simplest repeated measures ANOVA model if we treat it as a univariate model:
$$y_{it} = a_{i} + b_{t} + \epsilon_{it}$$
where $i$ indexes the cases and $t$ the times at which we measured them (so the data are in long form). $y_{it}$ represents the outcomes stacked one on top of the other, $a_{i}$ the mean of each case, $b_{t}$ the mean of each time point, and $\epsilon_{it}$ the deviations of the individual measurements from the case and time-point means. You can include additional between-subject factors as predictors in this setup.
We do not need to make distributional assumptions about the $a_{i}$, as they can go into the model as fixed effects, i.e., dummy variables (contrary to what we do with linear mixed models). The same goes for the time dummies. For this model, you simply regress the outcome in long form on the person dummies and the time dummies. The effect of interest is the time dummies; the $F$-test of the null hypothesis $b_{1}=\dots=b_{t}=0$ is the major test in the univariate repeated measures ANOVA.
What are the required assumptions for the $F$-test to behave appropriately? The one relevant to your question is:
\begin{equation}
\epsilon_{it}\sim\mathcal{N}(0,\sigma^2)\quad\text{these errors are normally distributed and homoskedastic}
\end{equation}
There are additional (more consequential) assumptions for the $F$-test to be valid, as one can see that the data are not independent of each other, since the individuals repeat across rows.
If you want to treat the repeated measures ANOVA as a multivariate model, the normality assumptions may be different, and I cannot expand on them beyond what you and I have seen on Wikipedia.
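The univariate setup above can be sketched directly as a dummy-variable regression (the simulated data and effect sizes below are my own): regress the long-form outcome on person and time dummies, then $F$-test the time dummies by comparing the full model to one without them.

```python
import numpy as np

# Repeated-measures ANOVA as OLS with person and time dummies (simulated data).
rng = np.random.default_rng(0)
n_sub, n_time = 10, 4
subj = np.repeat(np.arange(n_sub), n_time)   # long form: one row per (i, t)
time = np.tile(np.arange(n_time), n_sub)
y = (rng.normal(size=n_sub)[subj]                       # case means a_i
     + np.array([0.0, 1.0, 2.0, 3.0])[time]            # time means b_t
     + rng.normal(scale=1.0, size=n_sub * n_time))     # errors eps_it

def design(cols):
    X = [np.ones_like(y)]
    if "subj" in cols:
        X += [(subj == k).astype(float) for k in range(1, n_sub)]
    if "time" in cols:
        X += [(time == k).astype(float) for k in range(1, n_time)]
    return np.column_stack(X)

def rss(X):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.sum((y - X @ beta) ** 2)

rss_full = rss(design({"subj", "time"}))
rss_red = rss(design({"subj"}))              # H0: all time effects are zero
df1 = n_time - 1
df2 = n_sub * n_time - (n_sub + n_time - 1)
F = ((rss_red - rss_full) / df1) / (rss_full / df2)
print(F)  # large, since the simulated time effects are clearly nonzero
```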
Repeated measures ANOVA: what is the normality assumption?

An explanation of normality for repeated-measures ANOVA can be found here:
Understanding repeated measure ANOVA assumptions for correct interpretation of SPSS output
You need normality of the residuals of the dependent variables (this implies a normal distribution in all groups, with common variance and group-dependent means), as in regression.
As you noticed, multivariate normality implies that all linear combinations of the dependent variables are normally distributed, so it is a stronger concept than normality of the single variables ($3 \rightarrow 1$). However, I'm not convinced this implies normality of the residuals ($3 \rightarrow 2$), given that residuals are determined by the independent variables (groups, in ANOVA) as well. I agree with you on point $5$: you are basically talking about an individual-level random effect having a normal distribution.
Is the theory of minimum variance unbiased estimation overemphasized in graduate school?

We know that:
If $X_1, X_2, \dots, X_n$ is a random sample from $\mathrm{Poisson}(\lambda)$, then for any $\alpha \in (0,1)$, $T_\alpha = \alpha \bar X + (1-\alpha)S^2$ is an unbiased estimator (UE) of $\lambda$.
Hence there exist infinitely many UEs of $\lambda$. A question now arises: which of these should we choose? That is why we turn to the UMVUE. Unbiasedness alone is not a good property, but being the UMVUE is a good property — though not an extremely strong one.
If $X_1, X_2, \dots, X_n$ is a random sample from $N(\mu, \sigma^2)$, then the minimum-MSE estimator of $\sigma^2$ of the form $T_\alpha = \alpha S^2$, with $(n-1)S^2 = \sum_{i=1}^{n}(X_i - \bar X)^2$, is $\frac{n-1}{n+1}S^2 = \frac{1}{n+1}\sum_{i=1}^{n}(X_i - \bar X)^2$. But it is biased, that is, it is not the UMVUE, even though it is best in terms of MSE.
Note that the Rao-Blackwell theorem says that to find the UMVUE we can concentrate only on those UEs which are functions of a sufficient statistic; that is, the UMVUE is the estimator with minimum variance among all UEs that are functions of a sufficient statistic. Hence the UMVUE is necessarily a function of a sufficient statistic.
The MLE and the UMVUE are each good from some point of view, but we can never say that one of them is better than the other. In statistics we deal with uncertain and random data, so there is always scope for improvement; we may find an estimator better than both the MLE and the UMVUE.
I don't think we overemphasize UMVUE theory in graduate school. This is purely my personal view: graduate school is a learning stage, so a graduate student needs to build a good basis in the UMVUE and other estimators.
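The normal-variance example above can be checked by simulation (the sample size, variance, and replication count are my choices): the biased estimator $\frac{n-1}{n+1}S^2$ beats the unbiased $S^2$ in mean squared error.

```python
import numpy as np

# MSE comparison for estimating sigma^2 = 4 from N(0, 4) samples of size n = 10:
# unbiased S^2 versus the minimum-MSE rescaling ((n-1)/(n+1)) S^2.
rng = np.random.default_rng(0)
n, sigma2, reps = 10, 4.0, 200_000
x = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))
s2 = x.var(axis=1, ddof=1)                    # unbiased estimator S^2
shrunk = (n - 1) / (n + 1) * s2               # biased, lower-MSE estimator
mse_s2 = np.mean((s2 - sigma2) ** 2)          # theory: 2*sigma^4/(n-1) ≈ 3.56
mse_shrunk = np.mean((shrunk - sigma2) ** 2)  # theory: 2*sigma^4/(n+1) ≈ 2.91
print(mse_s2, mse_shrunk)
```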
Is the theory of minimum variance unbiased estimation overemphasized in graduate school?

Perhaps the paper by Brad Efron, "Maximum Likelihood and Decision Theory," can help clarify this. Efron notes that one main difficulty with the UMVUE is that it is in general hard to compute, and in many cases does not exist.
How should standard errors for mixed effects model estimates be calculated?

My initial thought was that, for ordinary linear regression, we just plug in our estimate of the residual variance, $\sigma^2$, as if it were the truth.
However, take a look at McCulloch and Searle (2001), Generalized, Linear, and Mixed Models, 1st edition, Section 6.4b, "Sampling variance". They indicate that you can't just plug in the estimates of the variance components:
Instead of dealing with the variance (matrix) of a vector $X \hat{\beta}$ we
consider the simpler case of the scalar $l' \hat{\beta}$ for estimable $l'\beta$ (i.e., $l' = t'X$ for some $t'$).
For known $V$, we have from (6.21) that $\text{var}(l' \beta^0) = l'(X'V^{-1}X)^- l$. A replacement for this when $V$ is not known is to use $l'(X'\hat{V}^{-1}X)^- l$, which is an estimate of $\text{var}(l'\beta^0) = \text{var}[l' (X' V^{-1} X)^- X' V^{-1} y]$. But it is not an estimate of $\text{var}(l'\hat{\beta}) = \text{var}[l' (X' \hat{V}^{-1} X)^- X' \hat{V}^{-1} y]$. The latter requires taking account of the variability of $\hat{V}$ as well as that in $y$. To deal with this, Kackar and Harville (1984, p. 854) observe that (in our notation) $l' \hat{\beta} - l' \beta$ can be expressed as the sum of two independent parts, $l' \hat{\beta} - l' \beta^0$ and $l'\beta^0 - l' \beta$. This leads to $\text{var}(l' \hat{\beta})$ being expressed as a sum of two variances which we write as
$$\text{var}(l'\hat{\beta}) = \dots \approx l'(X' V^{-1} X)^- l + l' T \, l$$
They go on to explain $T$.
So this answers the first part of your question and indicates that your intuition was correct (and mine was wrong).
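To make the distinction concrete, here is a small numerical sketch (the design and covariance matrix are invented for illustration) of the naive plug-in quantity $l'(X'V^{-1}X)^{-}l$, the thing McCulloch and Searle note is an estimate of $\text{var}(l'\beta^0)$ rather than of $\text{var}(l'\hat\beta)$:

```python
import numpy as np

# Plug-in GLS variance: var(l' beta0) = l' (X' V^{-1} X)^{-1} l, with V treated
# as known. Toy setup: intercept-only model, two clusters of two exchangeable
# observations with within-cluster correlation 0.5.
X = np.ones((4, 1))
V = np.kron(np.eye(2), np.array([[1.0, 0.5], [0.5, 1.0]]))  # block-diagonal V
l = np.array([1.0])
XtVinvX = X.T @ np.linalg.solve(V, X)
plugin_var = float(l @ np.linalg.solve(XtVinvX, l))
print(plugin_var)  # 3/8: each cluster contributes 1' V_block^{-1} 1 = 4/3
```

When $\hat V$ is estimated rather than known, this plug-in value understates $\text{var}(l'\hat\beta)$ by the $l'Tl$ term above, which is what corrections such as Kenward-Roger address.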
In Random Forest, why is a random subset of features chosen at the node level rather than at the tree level?

Suppose we have 10 features f1, f2, ..., f9, f10. If we take a subset, say f1, f3, f4, f8, at the tree level itself, then we construct the whole tree using only these 4 features.
At every node we calculate the information gain, compare only these 4 features, and take the feature that yields the maximum gain. This isn't of much use, as we are restricting the tree's learning to only those 4 features. By contrast, when we take a subset of features, say f1, f8, f9, at the first node, we compute the information gain for these 3 features and choose the one that gives the maximum value. Instead of growing the tree further with the same features, we choose another subset, say f4, f7, f2, and make the next split based on these. Suppose f8 was selected at the first node and f2 at the second node. The model is able to learn the relationship between the two, which wouldn't be possible if some feature outside the tree-level subset gave higher gain than f2 after f8 had been selected as the root.
In this way the model can learn relationships between different features in a more diversified way: many features are explored within a single tree, so relations among them are preserved. Hope you got it now :)
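The mechanics can be sketched in a toy tree grower (everything below is an illustrative toy, not a production random forest): note that a fresh random feature subset is drawn inside `grow`, at every node, rather than once per tree.

```python
import numpy as np

def entropy(y):
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def best_split(X, y, feats):
    # among the candidate features, pick (feature, threshold) maximizing info gain
    best_j, best_t, best_gain = None, None, 0.0
    base = entropy(y)
    for j in feats:
        for t in np.unique(X[:, j])[:-1]:
            left, right = y[X[:, j] <= t], y[X[:, j] > t]
            gain = base - (left.size * entropy(left) +
                           right.size * entropy(right)) / y.size
            if gain > best_gain:
                best_j, best_t, best_gain = j, t, gain
    return best_j, best_t

def grow(X, y, rng, m, depth=0, max_depth=3):
    if depth == max_depth or np.unique(y).size == 1:
        return int(np.bincount(y).argmax())              # leaf: majority class
    feats = rng.choice(X.shape[1], size=m, replace=False)  # NEW subset per node
    j, t = best_split(X, y, feats)
    if j is None:
        return int(np.bincount(y).argmax())
    mask = X[:, j] <= t
    return (j, t, grow(X[mask], y[mask], rng, m, depth + 1, max_depth),
            grow(X[~mask], y[~mask], rng, m, depth + 1, max_depth))

def predict_one(node, x):
    while isinstance(node, tuple):
        j, t, lo, hi = node
        node = lo if x[j] <= t else hi
    return node

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] > 0).astype(int)
tree = grow(X, y, rng, m=2)
preds = np.array([predict_one(tree, row) for row in X])
acc = (preds == y).mean()
print(acc)
```

Moving the `rng.choice` call outside `grow`, so the same `feats` is reused everywhere, would give the tree-level variant the answer argues against.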
How to interpret GARCH parameters?

Campbell et al. (1996) give the following interpretation on p. 483:
$\gamma_1$ measures the extent to which a volatility shock today feeds through into next period's volatility, and $\gamma_1 + \delta_1$ measures the rate at which this effect dies out over time.
According to Chan (2010), persistence of volatility occurs when $\gamma_1 + \delta_1 = 1$, and thus $a_t$ is a non-stationary process. This is also called IGARCH (Integrated GARCH). Under this scenario, the unconditional variance becomes infinite (p. 110).
Note: GARCH(1,1) can be written in the form of an ARMA(1,1) to show that the persistence is given by the sum of the parameters (proofs on p. 110 of Chan (2010) and p. 483 of Campbell et al. (1996)). Also, $a^2_{t-1} - \sigma^2_{t-1}$ is then the volatility shock.
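A quick simulation makes the persistence reading concrete (the parameter values are my own choice): with shock coefficient $+$ lag coefficient $= 0.95 < 1$, the unconditional variance is $\omega/(1 - 0.95)$ and a volatility shock has a half-life of $\ln(0.5)/\ln(0.95) \approx 13.5$ periods.

```python
import numpy as np

# GARCH(1,1): sigma2_t = omega + alpha*a_{t-1}^2 + beta*sigma2_{t-1},
# with a_t = sigma_t * z_t and z_t ~ N(0, 1).
rng = np.random.default_rng(0)
omega, alpha, beta = 0.05, 0.05, 0.90   # alpha + beta = 0.95: high persistence
T = 200_000
a = np.empty(T)
sigma2 = omega / (1 - alpha - beta)     # start at the unconditional variance
for t in range(T):
    a[t] = np.sqrt(sigma2) * rng.standard_normal()
    sigma2 = omega + alpha * a[t] ** 2 + beta * sigma2

print(a.var())                          # near omega/(1-alpha-beta) = 1.0
print(np.log(0.5) / np.log(alpha + beta))  # shock half-life, ~13.5 periods
```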
How to interpret GARCH parameters?

A large value of the third coefficient ($\delta_{1}$) means that large changes in volatility will affect future volatilities for a long period of time, since the decay is slower.
How to interpret GARCH parameters?

Alpha captures the ARCH effect.
Beta captures the GARCH effect.
The closer the sum of the two is to 1, the longer volatility persists.
How to interpret GARCH parameters?

Alpha (the ARCH term) represents how volatility reacts to new information.
Beta (the GARCH term) represents the persistence of volatility.
Alpha + Beta gives an overall measure of the persistence of volatility.
How to analyze longitudinal count data: accounting for temporal autocorrelation in GLMM?
|
Log transforming your response is an option, although not ideal. A GLM framework is generally preferred. If you are not familiar with GLMs then start by reviewing them prior to looking at mixed model extensions. For count data, Poisson or Negative Binomial distributional assumptions will likely be suitable. Negative Binomial is indicated if the variance is higher than the mean, indicating over-dispersion (https://en.wikipedia.org/wiki/Overdispersion). The interpretation of the parameter estimates is equivalent for the two.
Several options exist in R with lme4 being most commonly cited in my experience.
#glmer
library(lme4)
glmer(count ~ AirT + I(AirT^2) + RainAmt24 + I(RainAmt24^2) +
RHpct + windspeed + sin(2*pi/360*DOY) +
cos(2*pi/360*DOY) + (1|plot), family=poisson,
data = Data)
# use glmer.nb with identical syntax but no family for negative binomial.
# glmmADMB with negative binomial
install.packages("glmmADMB",
repos=c("http://glmmadmb.r-forge.r-project.org/repos",
getOption("repos")),type="source")
require(glmmADMB)
glmmadmb(count ~ AirT + I(AirT^2) + RainAmt24 + I(RainAmt24^2) +
RHpct + windspeed + sin(2*pi/360*DOY) +
cos(2*pi/360*DOY) + (1|plot),
family="nbinom", zeroInflation=FALSE, data=Data)
# glmmPQL, requires an estimate for theta which can be obtained from a
# glm model in which the correlation structure is ignored.
library(MASS)
glmmPQL(count ~ AirT + I(AirT^2) + RainAmt24 + I(RainAmt24^2) +
RHpct + windspeed + sin(2*pi/360*DOY) +
cos(2*pi/360*DOY) , random = list(~1 | plot),
data = Data, family = negative.binomial(theta = 4.22,
link = log))
These links may also be of assistance:
https://udrive.oit.umass.edu/xythoswfs/webui/_xy-11096203_1-t_yOxYgf1s
http://www.cell.com/trends/ecology-evolution/pdf/S0169-5347(09)00019-6.pdf
Both are by Ben Bolker, author of lme4.
I haven't tested the examples but they should give you an idea of where to start. Please supply data if you wish to verify their implementation.
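A quick dispersion check, which the answer alludes to, can guide the Poisson versus Negative Binomial choice before fitting. This is a hedged Python sketch with simulated counts standing in for the real data:

```python
import numpy as np

rng = np.random.default_rng(42)
# Simulated stand-in for the real counts; negative-binomial draws are overdispersed
counts = rng.negative_binomial(5, 0.5, size=500)

mean = counts.mean()
var = counts.var(ddof=1)
dispersion = var / mean  # ~1 for Poisson-like data; >1 indicates overdispersion
```

If the dispersion is well above 1, the Negative Binomial family is the safer choice.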
|
How to analyze longitudinal count data: accounting for temporal autocorrelation in GLMM?
|
Log transforming your response is an option although not ideal. A GLM framework is generally preferred. If you are not familiar with GLMs then start by reviewing them prior to looking at mixed model e
|
How to analyze longitudinal count data: accounting for temporal autocorrelation in GLMM?
Log transforming your response is an option, although not ideal. A GLM framework is generally preferred. If you are not familiar with GLMs then start by reviewing them prior to looking at mixed model extensions. For count data, Poisson or Negative Binomial distributional assumptions will likely be suitable. Negative Binomial is indicated if the variance is higher than the mean, indicating over-dispersion (https://en.wikipedia.org/wiki/Overdispersion). The interpretation of the parameter estimates is equivalent for the two.
Several options exist in R with lme4 being most commonly cited in my experience.
#glmer
library(lme4)
glmer(count ~ AirT + I(AirT^2) + RainAmt24 + I(RainAmt24^2) +
RHpct + windspeed + sin(2*pi/360*DOY) +
cos(2*pi/360*DOY) + (1|plot), family=poisson,
data = Data)
# use glmer.nb with identical syntax but no family for negative binomial.
# glmmADMB with negative binomial
install.packages("glmmADMB",
repos=c("http://glmmadmb.r-forge.r-project.org/repos",
getOption("repos")),type="source")
require(glmmADMB)
glmmadmb(count ~ AirT + I(AirT^2) + RainAmt24 + I(RainAmt24^2) +
RHpct + windspeed + sin(2*pi/360*DOY) +
cos(2*pi/360*DOY) + (1|plot),
family="nbinom", zeroInflation=FALSE, data=Data)
# glmmPQL, requires an estimate for theta which can be obtained from a
# glm model in which the correlation structure is ignored.
library(MASS)
glmmPQL(count ~ AirT + I(AirT^2) + RainAmt24 + I(RainAmt24^2) +
RHpct + windspeed + sin(2*pi/360*DOY) +
cos(2*pi/360*DOY) , random = list(~1 | plot),
data = Data, family = negative.binomial(theta = 4.22,
link = log))
These links may also be of assistance:
https://udrive.oit.umass.edu/xythoswfs/webui/_xy-11096203_1-t_yOxYgf1s
http://www.cell.com/trends/ecology-evolution/pdf/S0169-5347(09)00019-6.pdf
Both are by Ben Bolker, author of lme4.
I haven't tested the examples but they should give you an idea of where to start. Please supply data if you wish to verify their implementation.
|
How to analyze longitudinal count data: accounting for temporal autocorrelation in GLMM?
Log transforming your response is an option although not ideal. A GLM framework is generally preferred. If you are not familiar with GLMs then start by reviewing them prior to looking at mixed model e
|
15,917
|
Why is Binning, Weight of Evidence and Information Value so ubiquitous in the Credit Risk/Finance industry?
|
At least for the US, it is due to regulatory reasons. The customer-facing risk models must be explainable and actionable. Some FIs, including mine, are already in favor of using spline-based models.
Also, you have to look into how it makes sense for the Business itself. For example, the inputs of the model. If you have loan or application-level features in your custom credit score (disastrous, but some FIs still do this), binning would allow the model to be more tolerant of those changes.
Let's say it's an automobile loan through a dealership where the loan is practically never the same as how it is submitted. I am promising my customer risk-based pricing of 4% for their loan, they are requesting X dollars more for tax, title, and fees which will affect the numeric features in my model.
How much more risk does $X (tied to payment, debt, and LTV ratios) represent if it isn't binned? Does it make sense for me to adjust my pricing and inconvenience my customers over <1 bps in additional risk?
|
Why is Binning, Weight of Evidence and Information Value so ubiquitous in the Credit Risk/Finance in
|
At least for the US, it is due to regulatory reasons. The customer-facing risk models must be explainable and actionable. Some FIs, including mine, are already in favor of using spline-based models.
A
|
Why is Binning, Weight of Evidence and Information Value so ubiquitous in the Credit Risk/Finance industry?
At least for the US, it is due to regulatory reasons. The customer-facing risk models must be explainable and actionable. Some FIs, including mine, are already in favor of using spline-based models.
Also, you have to look into how it makes sense for the Business itself. For example, the inputs of the model. If you have loan or application-level features in your custom credit score (disastrous, but some FIs still do this), binning would allow the model to be more tolerant of those changes.
Let's say it's an automobile loan through a dealership where the loan is practically never the same as how it is submitted. I am promising my customer risk-based pricing of 4% for their loan, they are requesting X dollars more for tax, title, and fees which will affect the numeric features in my model.
How much more risk does $X (tied to payment, debt, and LTV ratios) represent if it isn't binned? Does it make sense for me to adjust my pricing and inconvenience my customers over <1 bps in additional risk?
|
Why is Binning, Weight of Evidence and Information Value so ubiquitous in the Credit Risk/Finance in
At least for the US, it is due to regulatory reasons. The customer-facing risk models must be explainable and actionable. Some FIs, including mine, are already in favor of using spline-based models.
A
|
15,918
|
Why is Binning, Weight of Evidence and Information Value so ubiquitous in the Credit Risk/Finance industry?
|
As a person working in this industry, I believe that logistic regression is the easiest way to explain your scorecard. There are many advanced machine learning techniques for classification problems, but consider that the main problem with ML is that it is hard to explain. When you devise a scorecard system, a salesperson will later ask why you scored a customer the way you did (the salesperson might believe that's a good customer based on what he knows; you have to explain to him that the customer shows signs of bad customers, such as a bad credit history somewhere else, and that's where logistic regression serves you well whilst ML cannot).
Binning => It should be manual work, because you have to understand why the data look the way they do. It could be done automatically, but the result might offer no meaning because the sample is biased.
We do use a lot of filtering criteria (WOE, IV, marginal IV). I think they are easy to calculate and to explain to non-risk people as well, which explains their popularity.
I just think binning is necessary, and transforming predictors improves the performance, as you already wrote.
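For reference, WOE and IV are simple to compute from binned good/bad counts. A minimal Python sketch, with hypothetical bin counts:

```python
import numpy as np

def woe_iv(goods, bads):
    """Weight of Evidence per bin and total Information Value,
    given counts of good and bad outcomes in each bin."""
    goods = np.asarray(goods, dtype=float)
    bads = np.asarray(bads, dtype=float)
    pct_good = goods / goods.sum()  # distribution of goods across bins
    pct_bad = bads / bads.sum()     # distribution of bads across bins
    woe = np.log(pct_good / pct_bad)
    iv = float(np.sum((pct_good - pct_bad) * woe))
    return woe, iv

# Hypothetical bins of one scorecard variable
woe, iv = woe_iv(goods=[400, 350, 250], bads=[50, 30, 20])
```

A variable whose good and bad distributions match across bins has IV = 0, i.e. no discriminating power.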
|
Why is Binning, Weight of Evidence and Information Value so ubiquitous in the Credit Risk/Finance in
|
As a person working on this industry, I believe that logistics regression is the easiest to explain your score card. There are so many machine learning advanced techniques for classification problem,
|
Why is Binning, Weight of Evidence and Information Value so ubiquitous in the Credit Risk/Finance industry?
As a person working in this industry, I believe that logistic regression is the easiest way to explain your scorecard. There are many advanced machine learning techniques for classification problems, but consider that the main problem with ML is that it is hard to explain. When you devise a scorecard system, a salesperson will later ask why you scored a customer the way you did (the salesperson might believe that's a good customer based on what he knows; you have to explain to him that the customer shows signs of bad customers, such as a bad credit history somewhere else, and that's where logistic regression serves you well whilst ML cannot).
Binning => It should be manual work, because you have to understand why the data look the way they do. It could be done automatically, but the result might offer no meaning because the sample is biased.
We do use a lot of filtering criteria (WOE, IV, marginal IV). I think they are easy to calculate and to explain to non-risk people as well, which explains their popularity.
I just think binning is necessary, and transforming predictors improves the performance, as you already wrote.
|
Why is Binning, Weight of Evidence and Information Value so ubiquitous in the Credit Risk/Finance in
As a person working on this industry, I believe that logistics regression is the easiest to explain your score card. There are so many machine learning advanced techniques for classification problem,
|
15,919
|
In Gelman's 8 school example, why is the standard error of the individual estimate assumed known?
|
On p114 of the same book you cite: "The problem of estimating a set of means with unknown variances will require some additional computational methods, presented in sections 11.6 and 13.6". So it is for simplicity; the equations in your chapter work out in a closed-form way, whereas if you model the variances, they do not, and you need MCMC techniques from the later chapters.
In the school example, they rely on large sample size to assume that the variances are known "for all practical purposes" (p119), and I expect they estimate them using $\frac{1}{n-1} \sum (x_i - \overline{x})^2$ and then pretend those are the exact known values.
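A small sketch of that estimate (the scores are hypothetical, standing in for one school's data):

```python
import numpy as np

x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])  # hypothetical scores
n = len(x)
s2 = ((x - x.mean()) ** 2).sum() / (n - 1)  # unbiased sample variance
se = (s2 / n) ** 0.5  # the standard error then treated as exactly known
```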
|
In Gelman's 8 school example, why is the standard error of the individual estimate assumed known?
|
On p114 of the same book you cite: "The problem of estimating a set of means with unknown variances will require some additional computational methods, presented in sections 11.6 and 13.6". So it is f
|
In Gelman's 8 school example, why is the standard error of the individual estimate assumed known?
On p114 of the same book you cite: "The problem of estimating a set of means with unknown variances will require some additional computational methods, presented in sections 11.6 and 13.6". So it is for simplicity; the equations in your chapter work out in a closed-form way, whereas if you model the variances, they do not, and you need MCMC techniques from the later chapters.
In the school example, they rely on large sample size to assume that the variances are known "for all practical purposes" (p119), and I expect they estimate them using $\frac{1}{n-1} \sum (x_i - \overline{x})^2$ and then pretend those are the exact known values.
|
In Gelman's 8 school example, why is the standard error of the individual estimate assumed known?
On p114 of the same book you cite: "The problem of estimating a set of means with unknown variances will require some additional computational methods, presented in sections 11.6 and 13.6". So it is f
|
15,920
|
What does function "effects" in R do?
|
Given the response vector $y$, the explanatory variable matrix $X$ and its QR decomposition $X=QR$, the effects vector returned by R is $Q^Ty$.
Here is a numeric example that confirms this:
> set.seed(1001)
> x<-rnorm(100)
> y<-1+2*x+rnorm(100)
> mod<-lm(y~x)
> xqr<-qr(cbind(1,x))
> sum(abs(qr.qty(xqr,y)-effects(mod)))
[1] 0
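A numpy analogue of the same check (note that numpy's reduced QR yields only the first $p$ entries of R's length-$n$ effects vector, possibly with sign flips depending on the QR convention; the flips cancel when solving for the coefficients):

```python
import numpy as np

rng = np.random.default_rng(1001)
x = rng.normal(size=100)
y = 1 + 2 * x + rng.normal(size=100)
X = np.column_stack([np.ones(100), x])

Q, R = np.linalg.qr(X)            # reduced QR: Q is 100 x 2
eff = Q.T @ y                     # the first p "effects"
beta = np.linalg.solve(R, eff)    # recovers the least-squares coefficients
```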
|
What does function "effects" in R do?
|
Given the response vector $y$, explanatory variable matrix $X$ and its QR decomposition $X=QR$, the effects returned by R is the vector $Q^Ty$.
Here is the numeric example which confirms the above:
>
|
What does function "effects" in R do?
Given the response vector $y$, the explanatory variable matrix $X$ and its QR decomposition $X=QR$, the effects vector returned by R is $Q^Ty$.
Here is a numeric example that confirms this:
> set.seed(1001)
> x<-rnorm(100)
> y<-1+2*x+rnorm(100)
> mod<-lm(y~x)
> xqr<-qr(cbind(1,x))
> sum(abs(qr.qty(xqr,y)-effects(mod)))
[1] 0
|
What does function "effects" in R do?
Given the response vector $y$, explanatory variable matrix $X$ and its QR decomposition $X=QR$, the effects returned by R is the vector $Q^Ty$.
Here is the numeric example which confirms the above:
>
|
15,921
|
Have I computed these likelihood ratios correctly?
|
Although the reasoning about calculating the LR from the SS values is quite fair, a least-squares method is equivalent to, but not the same as, a maximum-likelihood estimate. (The difference can be illustrated, e.g., in the calculation of the variance estimate, whose sum of squares is divided by (n-1) in a least-squares approach and by n in maximum likelihood. The maximum-likelihood estimate is thus consistent, but slightly biased.)
This has some implications: you can calculate the LR as the likelihood is proportional to $\frac{1}{s}$, but that doesn't give you the likelihood of your anova model itself. It just tells you something about the ratio. As the AIC is classically defined in terms of the likelihood, I'm not sure if you can use the AIC as you intend.
I've looked at the spreadsheet, but the values for the "uncorrected LR within" (I'm also not completely following what exactly you're trying to calculate there) seem highly unlikely to me.
On a side note, the power of LR testing is that you can just contrast the models you want, you don't have to do that for all of them (which lowers the multitesting error). If you do this for every term, your LR is completely equivalent to an F test, and in the case of least squares, as far as I know even numerically about the same.
Your mileage may vary, but I've never been comfortable mixing concepts from two different frameworks (i.e. least squares versus maximum likelihood). Personally, I'd report the F statistics and implement the LR in a function that allows you to compare models (e.g. the anova function for lme models, which does exactly that).
My 2 cents.
PS : I did look at your code, but couldn't really figure out all the variables. If you would annotate your code using comments, that would make life a bit easier again. The EXCEL sheet is also not the easiest to figure out. I'll check again later to see if I can make something from it.
|
Have I computed these likelihood ratios correctly?
|
Although the reasoning about calculating the LR from the SS values is quite fair, a least-squares method is equivalent but not the same as a likelihood estimate. (The difference can be illustrated eg
|
Have I computed these likelihood ratios correctly?
Although the reasoning about calculating the LR from the SS values is quite fair, a least-squares method is equivalent to, but not the same as, a maximum-likelihood estimate. (The difference can be illustrated, e.g., in the calculation of the variance estimate, whose sum of squares is divided by (n-1) in a least-squares approach and by n in maximum likelihood. The maximum-likelihood estimate is thus consistent, but slightly biased.)
This has some implications: you can calculate the LR as the likelihood is proportional to $\frac{1}{s}$, but that doesn't give you the likelihood of your anova model itself. It just tells you something about the ratio. As the AIC is classically defined in terms of the likelihood, I'm not sure if you can use the AIC as you intend.
I've looked at the spreadsheet, but the values for the "uncorrected LR within" (I'm also not completely following what exactly you're trying to calculate there) seem highly unlikely to me.
On a side note, the power of LR testing is that you can just contrast the models you want, you don't have to do that for all of them (which lowers the multitesting error). If you do this for every term, your LR is completely equivalent to an F test, and in the case of least squares, as far as I know even numerically about the same.
Your mileage may vary, but I've never been comfortable mixing concepts from two different frameworks (i.e. least squares versus maximum likelihood). Personally, I'd report the F statistics and implement the LR in a function that allows you to compare models (e.g. the anova function for lme models, which does exactly that).
My 2 cents.
PS : I did look at your code, but couldn't really figure out all the variables. If you would annotate your code using comments, that would make life a bit easier again. The EXCEL sheet is also not the easiest to figure out. I'll check again later to see if I can make something from it.
|
Have I computed these likelihood ratios correctly?
Although the reasoning about calculating the LR from the SS values is quite fair, a least-squares method is equivalent but not the same as a likelihood estimate. (The difference can be illustrated eg
|
15,922
|
Covariance for three variables
|
To expand on Zachary's comment, the covariance matrix does not capture the "relation" between two random variables, as "relation" is too broad a concept. For example, we'd probably want any measure of their "relation" to include the dependence of the two variables on each other.
However, we know that $cov(X,Y)=0$ does not imply that they are independent, as is the case, for example, with the two random variables $X \sim U(-1,1)$ and $Y=X^2$ (for a short proof, see: https://en.wikipedia.org/wiki/Covariance#Uncorrelatedness_and_independence).
So if we thought that the covariance included full information about variable relations, as you ask, zero covariance would suggest no dependence. This is what Zachary means when he says that there can be non-linear dependences that the covariance does not capture.
However, let $X:=(X_{1},...,X_{n})'$ be multivariate normal, $X \sim N(\mu,\Sigma)$. Then $X_{1},...,X_{n}$ are independent iff $\Sigma$ is a diagonal matrix with all off-diagonal elements = 0 (i.e., if all covariances are 0).
To see that this condition is sufficient, observe that the joint density factors,
\begin{equation} f(x_{1},...,x_{n}) =
\dfrac{1}{ \sqrt{(2 \pi)^{n} | \Sigma |}} \exp\left(- \dfrac{1}{2} (x - \mu)' \Sigma^{-1} (x - \mu)\right)= \prod^{n}_{i=1} \dfrac{1}{\sqrt{2 \pi \sigma_{ii}}} \exp\left(- \dfrac{(x_{i}-\mu_{i})^{2}}{2 \sigma_{ii}}\right)=f_{1}(x_{1})\cdots f_{n}(x_{n}).\end{equation}
To see that the condition is necessary, recall the bivariate case. If $X_{1}$ and $X_{2}$ are independent, then $X_{1}$ and $X_{1}|X_{2} = x_{2}$ must have the same variance, so
\begin{equation} \sigma_{11}=\sigma_{11|2}=\sigma_{11}-\sigma^{2}_{12} \sigma^{-1}_{22} \end{equation}
which implies $\sigma_{12}=0$. By the same argument, all off-diagonal elements of $\Sigma$ must be zero.
(source: prof. Geert Dhaene's Advanced Econometrics slides)
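A quick numerical illustration of the $X \sim U(-1,1)$, $Y=X^2$ example above: the sample covariance is essentially zero even though $Y$ is a deterministic function of $X$.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 100_000)
y = x ** 2                    # Y is completely determined by X
cov = np.cov(x, y)[0, 1]      # yet the (sample) covariance is near zero
```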
|
Covariance for three variables
|
To expand on Zachary's comment, the covariance matrix does not capture the "relation" between two random variables, as "relation" is too broad of a concept. For example, we'd probably want to include
|
Covariance for three variables
To expand on Zachary's comment, the covariance matrix does not capture the "relation" between two random variables, as "relation" is too broad a concept. For example, we'd probably want any measure of their "relation" to include the dependence of the two variables on each other.
However, we know that $cov(X,Y)=0$ does not imply that they are independent, as is the case, for example, with the two random variables $X \sim U(-1,1)$ and $Y=X^2$ (for a short proof, see: https://en.wikipedia.org/wiki/Covariance#Uncorrelatedness_and_independence).
So if we thought that the covariance included full information about variable relations, as you ask, zero covariance would suggest no dependence. This is what Zachary means when he says that there can be non-linear dependences that the covariance does not capture.
However, let $X:=(X_{1},...,X_{n})'$ be multivariate normal, $X \sim N(\mu,\Sigma)$. Then $X_{1},...,X_{n}$ are independent iff $\Sigma$ is a diagonal matrix with all off-diagonal elements = 0 (i.e., if all covariances are 0).
To see that this condition is sufficient, observe that the joint density factors,
\begin{equation} f(x_{1},...,x_{n}) =
\dfrac{1}{ \sqrt{(2 \pi)^{n} | \Sigma |}} \exp\left(- \dfrac{1}{2} (x - \mu)' \Sigma^{-1} (x - \mu)\right)= \prod^{n}_{i=1} \dfrac{1}{\sqrt{2 \pi \sigma_{ii}}} \exp\left(- \dfrac{(x_{i}-\mu_{i})^{2}}{2 \sigma_{ii}}\right)=f_{1}(x_{1})\cdots f_{n}(x_{n}).\end{equation}
To see that the condition is necessary, recall the bivariate case. If $X_{1}$ and $X_{2}$ are independent, then $X_{1}$ and $X_{1}|X_{2} = x_{2}$ must have the same variance, so
\begin{equation} \sigma_{11}=\sigma_{11|2}=\sigma_{11}-\sigma^{2}_{12} \sigma^{-1}_{22} \end{equation}
which implies $\sigma_{12}=0$. By the same argument, all off-diagonal elements of $\Sigma$ must be zero.
(source: prof. Geert Dhaene's Advanced Econometrics slides)
|
Covariance for three variables
To expand on Zachary's comment, the covariance matrix does not capture the "relation" between two random variables, as "relation" is too broad of a concept. For example, we'd probably want to include
|
15,923
|
Is there a clear set of conditions under which lasso, ridge, or elastic net solution paths are monotone?
|
I can give you a sufficient condition for the path to be monotonic: an orthonormal design of $X$.
Suppose an orthonormal design matrix, that is, with $p$ variables in $X$, we have that $\frac{X'X}{n} = I_p$. With an orthonormal design the OLS regression coefficients are simply $\hat{\beta}^{ols} = \frac{X'y}{n}$.
The Karush-Kuhn-Tucker conditions for the LASSO thus simplify to:
$$
\frac{X'y}{n} = \hat{\beta}^{lasso} + \lambda s \implies \hat{\beta}^{ols} = \hat{\beta}^{lasso} + \lambda s
$$
where $s$ is the subgradient. Hence, for each $j\in \{1, \dots, p\}$ we have that $\hat{\beta}_j^{ols} = \hat{\beta}_j^{lasso} + \lambda s_j$, and we have a closed-form solution for the lasso estimates:
$$
\hat{\beta}_j^{lasso} = sign\left(\hat{\beta}_j^{ols}\right)\left(|\hat{\beta}_j^{ols}| - \lambda \right)_{+}
$$
which is monotonic in $\lambda$. While this is not a necessary condition, we see that any non-monotonicity must come from correlation among the covariates in $X$.
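The closed-form solution above is the soft-thresholding operator, and its monotonicity can be checked numerically; a small sketch with made-up OLS coefficients:

```python
import numpy as np

def soft_threshold(beta_ols, lam):
    """Closed-form lasso solution under an orthonormal design."""
    return np.sign(beta_ols) * np.maximum(np.abs(beta_ols) - lam, 0.0)

beta_ols = np.array([3.0, -1.5, 0.5])   # hypothetical OLS estimates
lambdas = np.linspace(0.0, 4.0, 50)
path = np.array([soft_threshold(beta_ols, lam) for lam in lambdas])
# |beta_j(lambda)| is non-increasing in lambda for every j: a monotone path
```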
|
Is there a clear set of conditions under which lasso, ridge, or elastic net solution paths are monot
|
I can give you a sufficient condition for the path to be monotonic: an orthonormal design of $X$.
Suppose an orthonormal design matrix, that is, with $p$ variables in $X$, we have that $\frac{X'X}{n}
|
Is there a clear set of conditions under which lasso, ridge, or elastic net solution paths are monotone?
I can give you a sufficient condition for the path to be monotonic: an orthonormal design of $X$.
Suppose an orthonormal design matrix, that is, with $p$ variables in $X$, we have that $\frac{X'X}{n} = I_p$. With an orthonormal design the OLS regression coefficients are simply $\hat{\beta}^{ols} = \frac{X'y}{n}$.
The Karush-Kuhn-Tucker conditions for the LASSO thus simplify to:
$$
\frac{X'y}{n} = \hat{\beta}^{lasso} + \lambda s \implies \hat{\beta}^{ols} = \hat{\beta}^{lasso} + \lambda s
$$
where $s$ is the subgradient. Hence, for each $j\in \{1, \dots, p\}$ we have that $\hat{\beta}_j^{ols} = \hat{\beta}_j^{lasso} + \lambda s_j$, and we have a closed-form solution for the lasso estimates:
$$
\hat{\beta}_j^{lasso} = sign\left(\hat{\beta}_j^{ols}\right)\left(|\hat{\beta}_j^{ols}| - \lambda \right)_{+}
$$
which is monotonic in $\lambda$. While this is not a necessary condition, we see that any non-monotonicity must come from correlation among the covariates in $X$.
|
Is there a clear set of conditions under which lasso, ridge, or elastic net solution paths are monot
I can give you a sufficient condition for the path to be monotonic: an orthonormal design of $X$.
Suppose an orthonormal design matrix, that is, with $p$ variables in $X$, we have that $\frac{X'X}{n}
|
15,924
|
MLE vs least squares in fitting probability distributions
|
One useful way of thinking about this is to note that there are cases where least squares and the MLE are the same, e.g. estimating the parameters where the random element has a normal distribution. So in fact, rather than (as you speculate) the MLE not assuming a noise model, what is going on is that it does assume there is random noise, but takes a more sophisticated view of how that noise is shaped rather than assuming it has a normal distribution.
Any textbook on statistical inference will deal with the nice properties of MLEs with regard to efficiency and consistency (but not necessarily bias). MLEs also have the nice property of being asymptotically normal themselves under a reasonable set of conditions.
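A minimal sketch of that coincidence in the simplest setting, estimating a normal mean: the sample mean both maximizes the normal likelihood and minimizes the sum of squares, so the two estimates agree exactly.

```python
import numpy as np

rng = np.random.default_rng(7)
x = 5.0 + 2.0 * rng.normal(size=500)  # hypothetical normal sample

mle_mean = x.mean()  # maximizer of the normal log-likelihood in the mean
ls_mean = np.linalg.lstsq(np.ones((x.size, 1)), x, rcond=None)[0][0]
```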
|
MLE vs least squares in fitting probability distributions
|
One useful way of thinking about this is to note that there are cases when least squares and the MLE are the same eg estimating the parameters where the random element has a normal distribution. So i
|
MLE vs least squares in fitting probability distributions
One useful way of thinking about this is to note that there are cases where least squares and the MLE are the same, e.g. estimating the parameters where the random element has a normal distribution. So in fact, rather than (as you speculate) the MLE not assuming a noise model, what is going on is that it does assume there is random noise, but takes a more sophisticated view of how that noise is shaped rather than assuming it has a normal distribution.
Any textbook on statistical inference will deal with the nice properties of MLEs with regard to efficiency and consistency (but not necessarily bias). MLEs also have the nice property of being asymptotically normal themselves under a reasonable set of conditions.
|
MLE vs least squares in fitting probability distributions
One useful way of thinking about this is to note that there are cases when least squares and the MLE are the same eg estimating the parameters where the random element has a normal distribution. So i
|
15,925
|
Deep learning vs. Decision trees and boosting methods
|
Can you be more specific about the types of data you are looking at? This will in part determine what type of algorithm will converge the fastest.
I'm also not sure how to compare methods like boosting and DL, as boosting is really just a collection of methods. What other algorithms are you using with the boosting?
In general, DL techniques can be described as layers of encoder/decoders. Unsupervised pre-training works by first pre-training each layer by encoding the signal, decoding the signal, then measuring the reconstruction error. Tuning can then be used to get better performance (e.g. if you use denoising stacked-autoencoders you can use back-propagation).
One good starting point for DL theory is:
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.73.795&rep=rep1&type=pdf
as well as these:
http://portal.acm.org/citation.cfm?id=1756025
(sorry, had to delete last link due to SPAM filtration system)
I didn't include any information on RBMs, but they are closely related (though personally a little more difficult to understand at first).
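As a toy illustration of the encode/decode/reconstruction-error idea described above (a linear projection via SVD, not an actual neural network layer):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
Xc = X - X.mean(axis=0)

# Linear "encoder/decoder": project onto k principal directions and back
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 3
Z = Xc @ Vt[:k].T              # encode to k dimensions
X_hat = Z @ Vt[:k]             # decode back to 10 dimensions
recon_err = np.mean((Xc - X_hat) ** 2)  # the reconstruction error to minimize
```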
|
Deep learning vs. Decision trees and boosting methods
|
Can you be more specific about the types of data you are looking at? This will in part determine what type of algorithm will converge the fastest.
I'm also not sure how to compare methods like boostin
|
Deep learning vs. Decision trees and boosting methods
Can you be more specific about the types of data you are looking at? This will in part determine what type of algorithm will converge the fastest.
I'm also not sure how to compare methods like boosting and DL, as boosting is really just a collection of methods. What other algorithms are you using with the boosting?
In general, DL techniques can be described as layers of encoder/decoders. Unsupervised pre-training works by first pre-training each layer by encoding the signal, decoding the signal, then measuring the reconstruction error. Tuning can then be used to get better performance (e.g. if you use denoising stacked-autoencoders you can use back-propagation).
One good starting point for DL theory is:
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.73.795&rep=rep1&type=pdf
as well as these:
http://portal.acm.org/citation.cfm?id=1756025
(sorry, had to delete last link due to SPAM filtration system)
I didn't include any information on RBMs, but they are closely related (though personally a little more difficult to understand at first).
|
Deep learning vs. Decision trees and boosting methods
Can you be more specific about the types of data you are looking at? This will in part determine what type of algorithm will converge the fastest.
I'm also not sure how to compare methods like boostin
|
15,926
|
Deep learning vs. Decision trees and boosting methods
|
Great question! Both adaptive boosting and deep learning can be classified as probabilistic learning networks. The difference is that "deep learning" specifically involves one or more "neural networks", whereas "boosting" is a "meta-learning algorithm" that requires one or more learning networks, called weak learners, which can be "anything" (i.e. a neural network, a decision tree, etc). The boosting algorithm combines one or more of its weak learner networks to form what's called a "strong learner", which can significantly "boost" the overall learning network's results (e.g. the Viola-Jones face detector, available in OpenCV).
|
Deep learning vs. Decision trees and boosting methods
|
Great Question! Both adaptive boosting and deep learning can be classified as probabilistic learning networks. The difference is that "deep learning" specifically involves one or more "neural networks
|
Deep learning vs. Decision trees and boosting methods
Great question! Both adaptive boosting and deep learning can be classified as probabilistic learning networks. The difference is that "deep learning" specifically involves one or more "neural networks", whereas "boosting" is a "meta-learning algorithm" that requires one or more learning networks, called weak learners, which can be "anything" (i.e. a neural network, a decision tree, etc). The boosting algorithm combines one or more of its weak learner networks to form what's called a "strong learner", which can significantly "boost" the overall learning network's results (e.g. the Viola-Jones face detector, available in OpenCV).
|
Deep learning vs. Decision trees and boosting methods
Great Question! Both adaptive boosting and deep learning can be classified as probabilistic learning networks. The difference is that "deep learning" specifically involves one or more "neural networks
|
15,927
|
Second moment method, Brownian motion?
|
Not the answer, but a possibly useful reformulation.
I assume the comment made above is right (that is, the sum has $2^{n+1}$ terms).
Denote $$p_n(\rho)=P(K_n>\rho 2^n)=P(K_n/2^n>\rho )$$
Observe that $p_n(\rho_1)>p_n(\rho_2)$ if $\rho_1 < \rho_2$
First point: if you ask whether such $\rho$ exists for all $n$, you need to show that for some $\delta$ the limit is positive $$\lim_{n\rightarrow \infty} p_n(\delta)>0$$
then, if $p_n(\delta)$ has positive limit and all values are positive, it must be separated from zero, let's say $p_n(\delta)>\varepsilon$. Then $$p_n(\min(\varepsilon,\delta)) \geq p_n(\delta)>\varepsilon \geq \min(\varepsilon,\delta)$$ so you have desired property for $\rho=\min(\varepsilon, \delta)$.
So you just need to show the limit of $p_n$ to be positive.
I would then investigate the variable $K_n/2^n$ and its expected value
|
15,928
|
pdf of the product of two independent random variables, normal and chi-square
|
simplify the term in the integral to
$T=e^{-\frac{1}{2}((\frac{\frac{z}{y}-\mu_x}{\sigma_x} )^2 -y)} y^{k/2-2} $
find the polynomial $p(y)$ such that
$[p(y)e^{-\frac{1}{2}((\frac{\frac{z}{y}-\mu_x}{\sigma_x} )^2 -y)}]'=p'(y)e^{-\frac{1}{2}((\frac{\frac{z}{y}-\mu_x}{\sigma_x} )^2 -y)} + p(y) [-\frac{1}{2}((\frac{\frac{z}{y}-\mu_x}{\sigma_x} )^2 -y)]' e^{-\frac{1}{2}((\frac{\frac{z}{y}-\mu_x}{\sigma_x} )^2 -y)} = T$
which reduces to finding $p(y)$ such that
$p'(y) + p(y) [-\frac{1}{2}((\frac{\frac{z}{y}-\mu_x}{\sigma_x} )^2 -y)]' = y^{k/2-2}$
or
$p'(y) -\frac{1}{2} p(y) \left(\frac{2 z \mu_x }{\sigma_x^2} y^{-2} - \frac{2 z^2}{\sigma_x^2} y^{-3} -1\right)= y^{k/2-2}$
which can be done by evaluating all powers of $y$ separately
edit after comments
Above solution won't work as it diverges.
Yet, some others have worked on this type of product.
Using the Fourier transform:
Schoenecker, Steven, and Tod Luginbuhl. "Characteristic Functions of the Product of Two Gaussian Random Variables and the Product of a Gaussian and a Gamma Random Variable." IEEE Signal Processing Letters 23.5 (2016): 644-647.
http://ieeexplore.ieee.org/document/7425177/#full-text-section
For the product $Z=XY$ with $X \sim \mathcal{N}(0,1)$ and $Y \sim \Gamma(\alpha,\beta)$ they obtained the characteristic function:
$\varphi_{Z} = \frac{1}{\beta^\alpha }\vert t \vert^{-\alpha} exp \left( \frac{1}{4\beta^2t^2} \right) D_{-\alpha} \left( \frac{1}{\beta \vert t \vert } \right)$
with $D_\alpha$ Whittaker's function ( http://people.math.sfu.ca/~cbm/aands/page_686.htm )
Using Mellin transform:
Springer and Thompson have described more generally the evaluation of products of beta, gamma and Gaussian distributed random variables.
Springer, M. D., and W. E. Thompson. "The distribution of products of beta, gamma and Gaussian random variables." SIAM Journal on Applied Mathematics 18.4 (1970): 721-737.
http://epubs.siam.org/doi/10.1137/0118065
They use the Mellin integral transform. The Mellin transform of $Z$ is the product of the Mellin transforms of $X$ and $Y$ (see http://epubs.siam.org/doi/10.1137/0118065 or https://projecteuclid.org/euclid.aoms/1177730201). In the studied cases of products the reverse transform of this product can be expressed as a Meijer G-function for which they also provide and prove computational methods.
They did not analyze the product of a Gaussian and gamma distributed variable, although you might be able to use the same techniques. If I try to do this quickly then I believe it should be possible to obtain an H-function (https://en.wikipedia.org/wiki/Fox_H-function ) although I do not directly see the possibility to get a G-function or make other simplifications.
$M\lbrace f_Y(x) \vert s \rbrace = 2^{s-1} \Gamma(\tfrac{1}{2}k+s-1)/\Gamma(\tfrac{1}{2}k)$
and
$M\lbrace f_X(x) \vert s \rbrace = \frac{1}{\pi}2^{(s-1)/2} \sigma^{s-1} \Gamma(s/2) $
you get
$M\lbrace f_Z(x) \vert s \rbrace = \frac{1}{\pi}2^{\frac{3}{2}(s-1)} \sigma^{s-1} \Gamma(s/2) \Gamma(\tfrac{1}{2}k+s-1)/\Gamma(\tfrac{1}{2}k) $
and the distribution of $Z$ is:
$f_Z(y) = \frac{1}{2 \pi i} \int_{c-i \infty}^{c+i \infty} y^{-s} M\lbrace f_Z(x) \vert s \rbrace ds $
which looks to me (after a change of variables to eliminate the $2^{\frac{3}{2}(s-1)}$ term) like at least an H-function
what is still left is the puzzle to express this inverse Mellin transform as a G function. The occurrence of both $s$ and $s/2$ complicates this. In the separate case for a product of only Gaussian distributed variables the $s/2$ could be transformed into $s$ by substituting the variable $x=w^2$. But because of the terms of the chi-square distribution this does not work anymore. Maybe this is the reason why nobody has provided a solution for this case.
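Since no closed form is reached above, a Monte Carlo sanity check can be useful (my addition, not from the cited papers): simulate $Z = XY$ and compare the sample moments with the exact moments that independence gives, $E[Z] = \mu_x k$ and $E[Z^2] = (\mu_x^2+\sigma_x^2)(k^2+2k)$, since $E[Y]=k$ and $E[Y^2]=k^2+2k$ for a $\chi^2_k$ variable:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, k, n = 1.0, 2.0, 4, 1_000_000

# Z = X * Y with X ~ N(mu, sigma^2) and Y ~ chi^2_k, independent
x = rng.normal(mu, sigma, n)
y = rng.chisquare(k, n)
z = x * y

mean_exact = mu * k                              # E[X] E[Y]
m2_exact = (mu**2 + sigma**2) * (k**2 + 2 * k)   # E[X^2] E[Y^2]
print(z.mean(), mean_exact)
print((z**2).mean(), m2_exact)
```

Any candidate density obtained from the Mellin inversion should reproduce these moments.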
|
15,929
|
$ARIMA(p,d,q)+X_t$, Simulation over Forecasting period
|
Firstly we consider a more general case. Let $Y = Y(A, X)$, where $A \sim f_A(\cdot)$ and $X \sim f_X(\cdot)$. Then, assuming the support of $g_X(\cdot)$ dominates that of $f_X(\cdot)$ and all the integrals below exist, we have:
$$
P(Y \le y) = \mathbb{E}_{f_A, f_X}\left[I(Y \le y)\right] =
\mathbb{E}_{f_X}\left[\mathbb{E}_{f_A}\left[I(Y \le y) \mid X \right]\right] =
\int_{supp(f_X)}{\mathbb{E}_{f_A}\left[I(Y \le y) \mid X = x \right]f_X(x)dx} =
\int_{supp(f_X)}{\mathbb{E}_{f_A}\left[I(Y \le y) \mid X = x \right]\frac{f_X(x)}{g_X(x)}g_X(x)dx} =
\int_{supp(g_X)}{\mathbb{E}_{f_A}\left[I(Y \le y) \frac{f_X(X)}{g_X(X)} \mid X = x \right]g_X(x)dx} =
\mathbb{E}_{g_X}\left[\mathbb{E}_{f_A}\left[I(Y \le y) \frac{f_X(X)}{g_X(X)} \mid X \right]\right] =
\mathbb{E}_{f_A, g_X}\left[I(Y \le y) \frac{f_X(X)}{g_X(X)}\right]
$$
In your case
$$
f_X(x) = \left\{
\begin{array}{cc}
p & x = 1 \\
1 - p & x = 0\\
\end{array}
\right.
$$
and $g_X(\cdot)$ can be defined like this:
$$
g_X(x) = \left\{
\begin{array}{cc}
0.5 & x = 1 \\
0.5 & x = 0\\
\end{array}
\right.
$$
Therefore, you can simulate $X$ via distribution $g_X(\cdot)$, but all the observations with $X=1$ will have the weight $\frac{p}{0.5}=2p$ and all the observations with $X=0$ will have the weight $\frac{1-p}{0.5}=2(1-p)$. Simulation of the ARIMA process will not be affected.
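As a sketch of this weighting scheme (with a toy stand-in for the ARIMA part, since the weights do not depend on it; the setup below is my own illustration):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
p, n, y0 = 0.2, 200_000, 1.0

x = rng.integers(0, 2, n)                  # X ~ g: P(X=1) = 0.5
w = np.where(x == 1, 2 * p, 2 * (1 - p))   # importance weights f(x)/g(x)
noise = rng.normal(0.0, 1.0, n)            # toy stand-in for the ARIMA part A
y = noise + x                              # Y = Y(A, X), toy version

est = np.mean(w * (y <= y0))               # weighted estimate of P(Y <= y0)

# Direct check for this toy Y: P(Y<=y0) = p*Phi(y0-1) + (1-p)*Phi(y0)
exact = p * norm.cdf(y0 - 1.0) + (1 - p) * norm.cdf(y0)
print(est, exact)
```

The weighted mean recovers the probability under $f_X$ even though $X$ was sampled fifty-fifty.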
|
15,930
|
How the Pearson's Chi Squared Test works
|
A chi-square test is designed to analyze categorical data. That means the data has been counted and divided into categories. It will not work with parametric or continuous data, so it cannot be used to assess fit in every instance.
Source: http://www.ling.upenn.edu/~clight/chisquared.htm
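A minimal example of the test on counted data (my own made-up counts), using SciPy, which computes $\sum (O-E)^2/E$ and looks up the p-value:

```python
from scipy.stats import chisquare

# Observed die-roll counts per face, n = 120; expected defaults to uniform (20 each)
observed = [18, 22, 16, 25, 20, 19]
result = chisquare(observed)
print(result.statistic, result.pvalue)  # statistic = 2.5, p ~ 0.78: no evidence against fairness
```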
|
15,931
|
How does cross-validation overcome the overfitting problem?
|
I can't think of a sufficiently clear explanation just at the moment, so I'll leave that to someone else; however cross-validation does not completely overcome the over-fitting problem in model selection, it just reduces it. The cross-validation error does not have a negligible variance, especially if the size of the dataset is small; in other words you get a slightly different value depending on the particular sample of data you use. This means that if you have many degrees of freedom in model selection (e.g. lots of features from which to select a small subset, many hyper-parameters to tune, many models from which to choose) you can over-fit the cross-validation criterion as the model is tuned in ways that exploit this random variation rather than in ways that really do improve performance, and you can end up with a model that performs poorly. For a discussion of this, see Cawley and Talbot "On Over-fitting in Model Selection and Subsequent Selection Bias in Performance Evaluation", JMLR, vol. 11, pp. 2079−2107, 2010
Sadly cross-validation is most likely to let you down when you have a small dataset, which is exactly when you need cross-validation the most. Note that k-fold cross-validation is generally more reliable than leave-one-out cross-validation as it has a lower variance, but may be more expensive to compute for some models (which is why LOOCV is sometimes used for model selection, even though it has a high variance).
|
15,932
|
How does cross-validation overcome the overfitting problem?
|
Not at all. However, cross validation helps you to assess by how much your method overfits.
For instance, if the training-data R-squared of a regression is 0.50 and the cross-validated R-squared is 0.48, you hardly have any overfitting and you feel good. On the other hand, if the cross-validated R-squared is only 0.3 here, then a considerable part of your model performance comes from overfitting and not from true relationships. In such a case you can either accept a lower performance or try different modelling strategies with less overfitting.
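A sketch of this comparison (my own toy regression with mostly noise features, purely illustrative):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n, p = 80, 40                                  # few samples, many (mostly noise) features
X = rng.normal(size=(n, p))
y = X[:, 0] + rng.normal(scale=1.0, size=n)    # only feature 0 carries signal

model = LinearRegression().fit(X, y)
train_r2 = model.score(X, y)                   # in-sample fit, inflated
cv_r2 = cross_val_score(LinearRegression(), X, y, cv=5, scoring="r2").mean()
print(f"train R^2 = {train_r2:.2f}, CV R^2 = {cv_r2:.2f}")
```

The gap between the two numbers is the part of the apparent performance due to overfitting.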
|
15,933
|
How does cross-validation overcome the overfitting problem?
|
My answer is more intuitive than rigorous, but maybe it will help...
As I understand it, overfitting is the result of model selection based on training and testing using the same data, where you have a flexible fitting mechanism: you fit your sample of data so closely that you're fitting the noise, outliers, and all the other variance.
Splitting the data into a training and testing set keeps you from doing this. But a static split is not using your data efficiently and your split itself could be an issue. Cross-validation keeps the don't-reward-an-exact-fit-to-training-data advantage of the training-testing split, while also using the data that you have as efficiently as possible (i.e. all of your data is used as training and testing data, just not in the same run).
If you have a flexible fitting mechanism, you need to constrain your model selection so that it doesn't favor "perfect" but complex fits somehow. You can do it with AIC, BIC, or some other penalization method that penalizes fit complexity directly, or you can do it with CV. (Or you can do it by using a fitting method that is not very flexible, which is one reason linear models are nice.)
Another way of looking at it is that learning is about generalizing, and a fit that's too tight is in some sense not generalizing. By varying what you learn on and what you're tested on, you generalize better than if you only learned the answers to a specific set of questions.
|
15,934
|
How does cross-validation overcome the overfitting problem?
|
Cross-Validation is a good, but not perfect, technique to minimize over-fitting.
Cross-validation will not perform well on outside data if the data you do have is not representative of the data you'll be trying to predict!
Here are two concrete situations when cross-validation has flaws:
You are using the past to predict the future: it is often a strong assumption that past observations will come from the same population, with the same distribution, as future observations. Cross-validating on a data set drawn from the past won't protect against this.
There is a bias in the data you collect: the data you observe is systematically different from the data you don't observe. For example, we know about respondent bias in those who chose to take a survey.
|
15,935
|
How does cross-validation overcome the overfitting problem?
|
From a Bayesian perspective, I'm not so sure that cross validation does anything that a "proper" Bayesian analysis doesn't do for comparing models. But I am not 100% certain that it does.
This is because if you are comparing models in a Bayesian way, then you are essentially already doing cross validation. This is because the posterior odds of model A $M_A$ against model B $M_B$, with data $D$ and prior information $I$ has the following form:
$$\frac{P(M_A|D,I)}{P(M_B|D,I)}=\frac{P(M_A|I)}{P(M_B|I)}\times\frac{P(D|M_A,I)}{P(D|M_B,I)}$$
And $P(D|M_A,I)$ is given by:
$$P(D|M_A,I)=\int P(D,\theta_A|M_A,I)d\theta_A=\int P(\theta_A|M_A,I)P(D|M_A,\theta_A,I)d\theta_A$$
Which is called the prior predictive distribution. It basically says how well the model predicted the data that was actually observed, which is exactly what cross validation does, with the "prior" being replaced by the "training" model fitted, and the "data" being replaced by the "testing" data. So if model B predicted the data better than model A, its posterior probability increases relative to model A. It seems from this that Bayes' theorem will actually do cross validation using all the data, rather than a subset. However, I am not fully convinced of this - seems like we get something for nothing.
Another neat feature of this method is that it has a built-in "Occam's razor", given by the ratio of normalisation constants of the prior distributions for each model.
However cross validation seems valuable for the dreaded old "something else" or what is sometimes called "model misspecification". I am constantly torn by whether this "something else" matters or not, for it seems like it should matter - but it leaves you paralyzed with no solution at all when it apparently matters. Just something to give you a headache, but nothing you can do about it - except for thinking of what that "something else" might be, and trying it out in your model (so that it is no longer part of "something else").
And further, cross validation is a way to actually do a Bayesian analysis when the integrals above are ridiculously hard. And cross validation "makes sense" to just about anyone - it is "mechanical" rather than "mathematical". So it is easy to understand what is going on. And it also seems to get your head to focus on the important part of models - making good predictions.
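The prior predictive $P(D|M,I)$ above can be estimated by brute force: draw $\theta$ from the prior and average the likelihood over the draws. A toy sketch (my own conjugate-normal setup, not from the answer):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
# Model A: mu ~ N(0, 1); Model B: mu ~ N(5, 1); data | mu ~ N(mu, 1)
data = rng.normal(0.3, 1.0, 20)   # data generated near mu = 0.3

def prior_predictive(prior_mean, n_draws=50_000):
    """Monte Carlo estimate of P(D|M) = E_prior[ P(D|theta) ]."""
    mus = rng.normal(prior_mean, 1.0, n_draws)             # theta ~ prior
    loglik = norm.logpdf(data[:, None], mus, 1.0).sum(0)   # log P(D|theta) per draw
    return np.exp(loglik).mean()

bayes_factor = prior_predictive(0.0) / prior_predictive(5.0)
print(bayes_factor)   # large: model A predicted the observed data far better
```

The model whose prior predicted the observed data better wins, exactly as in the ratio above.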
|
15,936
|
How does cross-validation overcome the overfitting problem?
|
I can also recommend these videos from the Stanford course in Statistical Learning. These videos go into quite some depth on how to use cross-validation effectively.
Cross-Validation and the Bootstrap (14:01)
K-fold Cross-Validation (13:33)
Cross-Validation: The Right and Wrong Ways (10:07)
|
15,937
|
Clustering & Time Series
Time-series clustering requires the sample size to remain the same while the features change over time; otherwise it makes little sense. In the question, though, inferring from the description, the sample size increases over time. In that case, to see a significant reduction in certain clusters, one should use a fixed sample: choose a fixed sample from the initial time period, and see how its cluster sizes and memberships change over time.
Symbolically, let's say you have 3 datasets (feature matrices) over time:
$$X_{t_{0}} \subset X_{t_{1}} \subset X_{t_{2}}$$
and corresponding clusterings $C_{0}, C_{1}, C_{2}$, where each $C$ is essentially a table of instances and their cluster memberships. To judge how the clustering changes, take a sample at $t_{0}$, say $X_{0} \subset X_{t_{0}}$, and track how $X_{0}$'s memberships and cluster sizes change across the clusterings $C_{0}, C_{1}, C_{2}$. This gives a good idea of whether there are "reductions" (significant changes) across clusterings, given that $X_{0}$ is representative over time.
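A minimal sketch of this tracking idea in Python, with hypothetical membership tables (note that in general cluster labels are not aligned across clusterings; here they are assumed comparable for illustration):

```python
from collections import Counter

# Fixed reference sample X0 drawn at t0 (hypothetical instance ids)
X0 = ["a", "b", "c", "d", "e"]

# Hypothetical clusterings C0, C1, C2: instance id -> cluster label
C = [
    {"a": 0, "b": 0, "c": 1, "d": 1, "e": 1},  # C0
    {"a": 0, "b": 1, "c": 1, "d": 1, "e": 1},  # C1
    {"a": 1, "b": 1, "c": 1, "d": 1, "e": 0},  # C2
]

# Cluster sizes restricted to the fixed sample X0, at each time
for t, clustering in enumerate(C):
    sizes = Counter(clustering[x] for x in X0)
    print(f"t{t}: cluster sizes over X0 = {dict(sizes)}")

# Fraction of X0 whose label changed between consecutive clusterings
for t in range(1, len(C)):
    changed = sum(C[t - 1][x] != C[t][x] for x in X0) / len(X0)
    print(f"t{t-1} -> t{t}: {changed:.0%} of X0 changed cluster")
```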
Clustering & Time Series
Update in 2023
If you are used to python, there is a scikit-learn clone for time series called sktime, which has appropriate methods for this problem:
https://sktime-backup.readthedocs.io/en/v0.15.1/api_reference/clustering.html
In general, we should mention that TSC (time-series clustering) problems are well known, and clustering them is indeed possible. To see how it works, I would visit this repo, which also links a paper from 2022. In particular, k-medoids seems to work well with distance-based clustering in time-series problems.
https://github.com/sktime/distance-based-time-series-clustering
For those who are lazy and do not want to visit the GitHub repo, here is the paper: https://arxiv.org/pdf/2205.15181.pdf
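To make the distance-based k-medoids idea concrete, here is a naive PAM-style sketch in pure Python over a precomputed distance matrix. This is an illustration, not sktime's implementation; for time series, the matrix would typically hold pairwise DTW distances.

```python
import random

def k_medoids(D, k, init=None, iters=50, seed=0):
    # Naive k-medoids on a precomputed distance matrix D.
    medoids = list(init) if init else random.Random(seed).sample(range(len(D)), k)
    for _ in range(iters):
        # Assign each point to its nearest medoid.
        clusters = {m: [] for m in medoids}
        for i in range(len(D)):
            clusters[min(medoids, key=lambda m: D[i][m])].append(i)
        # Move each medoid to the member minimizing total within-cluster distance.
        new_medoids = [min(members, key=lambda c: sum(D[c][j] for j in members))
                       for members in clusters.values()]
        if set(new_medoids) == set(medoids):
            return medoids, clusters
        medoids = new_medoids
    return medoids, clusters

# Toy example: two well-separated groups of one-dimensional "series"
pts = [0, 1, 2, 10, 11, 12]
D = [[abs(a - b) for b in pts] for a in pts]
medoids, clusters = k_medoids(D, k=2, init=(0, 3))
print(medoids, clusters)  # medoids settle on points 1 and 4
```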
Could any equation have predicted the results of this simulation?
At any given point in the game, you're $3$ or fewer "perfect flips" away from winning.
For example, suppose you've flipped the following sequence so far:
$$
HTTHHHTTTTTTH
$$
You haven't won yet, but you could win in two more flips if those two flips are $TH$. In other words, your last flip was $H$ so you have made "one flip" worth of progress toward your goal.
Since you mentioned Markov Chains, let's describe the "state" of the game by how much progress you have made toward the desired sequence $HTH$. At every point in the game, your progress is either $0$, $1$, or $2$--if it reaches $3$, then you have won. So we'll label the states $0$, $1$, $2$. (And if you want, you can say that there's an "absorbing state" called "state $3$".)
You start out in state $0$, of course.
You want to know the expected number of flips, from the starting point, state $0$. Let $E_i$ denote the expected number of flips, starting from state $i$.
At state $0$, what can happen? You can either flip $H$, and move to state $1$, or you flip $T$ and remain in state $0$. But either way, your "flip counter" goes up by $1$. So:
$$
E_0 = p (1 + E_1) + (1-p)(1 + E_0),
$$
where $p = P(H)$, or equivalently
$$
E_0 = 1 + p E_1 + (1-p) E_0.
$$
The "$1+$" comes from incrementing your "flip counter".
At state $1$, you want $T$, not $H$. But if you do get an $H$, at least you don't go back to the beginning--you still have an $H$ that you can build on next time. So:
$$
E_1 = 1 + p E_1 + (1-p) E_2.
$$
At state $2$, you either flip $H$ and win, or you flip $T$ and go all the way back to the beginning.
$$
E_2 = 1 + (1-p) E_0.
$$
Now solve the three linear equations for the three unknowns.
In particular you want $E_0$. I get
$$
E_0 = \left( \frac{1}{p} \right) \left( \frac{1}{p} + \frac{1}{1-p} + 1 \right),
$$
which for $p=1/20$ gives $E_0 = 441 + 1/19 \approx 441.0526$. (So the mean is not $413$. In my own simulations I do get results around $441$ on average, at least if I do around $10^5$ or $10^6$ trials.)
In case you are interested, our three linear equations come from the Law of Total Expectation.
This is really the same as the approach in Stephan Kolassa's answer, but it is a little more efficient because we don't need as many states. For example, there is no real difference between $TTT$ and $HTT$--either way, you're back at the beginning. So we can "collapse" those sequences together, instead of treating them as separate states.
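As a numerical sanity check, the three equations reduce by substitution to the closed form above (a small sketch using exact rational arithmetic):

```python
from fractions import Fraction

def expected_flips(p):
    # From the first two equations: E0 - E1 = 1/p and E1 - E2 = 1/(1-p).
    # Substituting into E2 = 1 + (1-p) E0 gives p*E0 = 1 + 1/p + 1/(1-p).
    return (1 + 1 / p + 1 / (1 - p)) / p

e0 = expected_flips(Fraction(1, 20))
print(e0, float(e0))  # 8380/19, i.e. 441 + 1/19
```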
Simulation code (two ways, sorry for using Python instead of R):
# Python 3
import random

def one_trial(p=0.05):
    # p = P(Heads)
    state = 0       # states are 0, 1, 2, representing the progress made so far
    flip_count = 0  # number of flips so far
    while True:
        flip_count += 1
        if state == 0:  # empty state
            state = random.random() < p
            # 1 if H, 0 if T
        elif state == 1:  # 'H'
            state += random.random() >= p
            # state 1 (H) if flip H, state 2 (HT) if flip T
        else:  # state 2, 'HT'
            if random.random() < p:  # HTH, game ends!
                return flip_count
            else:  # HTT, back to empty state
                state = 0

def slow_trial(p=0.05):
    sequence = ''
    while sequence[-3:] != 'HTH':
        if random.random() < p:
            sequence += 'H'
        else:
            sequence += 'T'
    return len(sequence)

N = 10**5
print(sum(one_trial() for _ in range(N)) / N)
print(sum(slow_trial() for _ in range(N)) / N)
Could any equation have predicted the results of this simulation?
First, you can refactor your R code to be (IMHO) a little more legible, also using pbapply::pbreplicate() to get a nice progress bar:
n_sims <- 1e5
library(pbapply)
results <- pbreplicate(n_sims, {
  flips <- NULL
  while (length(flips) < 3 || !identical(tail(flips, 3), c("H", "T", "H"))) {
    flips <- c(flips, sample(c("H", "T"), size = 1, prob = c(.05, .95)))
  }
  length(flips)
})
hist(results, breaks = seq(-0.5, max(results) + 0.5))
Growing the flips vector in each step makes this code very slow. It would be much faster to grow it in large batches - but that would be at the expense of legibility. As it is, this code runs its 100,000 replicates in about ten minutes, which we can use to google around a bit for an abstract solution.
Second, there indeed is an abstract solution. The details are a bit complicated, and I unfortunately don't have the time to write them all up right now, but I'll give the gist.
Specifically, we can model your sequence of unfair coin flips as a Markov chain, where each state corresponds to the last three flips, and where we stop after hitting HTH. Thus, we have a single absorbing state and seven transient states, and if we use $p$ to denote the probability of next getting a Head, we get a state transition matrix as follows:
(Apologies for pasting a picture; I don't know how to get the annotations to work in MathJaX. LaTeX code below.)
This is already in canonical form, with the single absorbing state at the bottom and the right. We can partition the matrix into a block matrix $T$ for the transient states and the row and column corresponding to the absorbing state:
$$\begin{pmatrix} T & T_0 \\ 0^t & 1 \end{pmatrix}.$$
The probability of each initial three-flip state is given by the coin itself - a sequence with $h$ heads has probability $p^h(1-p)^{3-h}$ - and this is our initial distribution $\tau$.
The distribution of the number of steps necessary to reach the absorbing state is the discrete phase-type distribution, which depends on the initial state $\tau$ and the submatrix $T$. You can take a look at Dayar (2005) on how to calculate the expectation of this distribution (also see Expected number of steps between states in a Markov Chain, where I got the pointer to this paper), but note that (1) you already start after three steps, so you would need to add $3$ to the expectation, and (2) Dayar (2005) assumes that the probability of starting in the absorbing state is zero, and in your case, it isn't, it's already $p^2(1-p)$, so you need to correct for that.
LaTeX code for the matrix, kudos to Bernard:
\documentclass{article}
\usepackage{mathtools}
\usepackage{blkarray, bigstrut}
\begin{document}
\[ \setlength{\bigstrutjot}{4pt}
\begin{blockarray}{r@{\enspace\vrule}rcccccccc}
\phantom{HHH}& & HHT & HHH & THT & THH & TTT & TTH & HTT & HTH \\
\BAhline
\begin{block}{r@{\enspace\vrule}r(cccccccc)}
HHT & & 0 & 0 & 0 & 0 & 0 & 0 & 1-p & p \bigstrut[t]\\
HHH & & 1-p & p & 0 & 0 & 0 & 0 & 0 & 0 \\
THT & & 0 & 0 & 0 & 0 & 0 & 0 & 1-p & p \\
THH & & 1-p & p & 0 & 0 & 0 & 0 & 0 & 0 \\
TTT & & 0 & 0 & 0 & 0 & 1-p & p & 0 & 0 \\
TTH & & 0 & 0 & 1-p & p & 0 & 0 & 0 & 0 \\
HTT & & 0 & 0 & 0 & 0 & 1-p & p & 0 & 0 \\
HTH & & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1\bigstrut[b]\\
\end{block}
\end{blockarray} \]
\end{document}
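As a sketch of the computation described above, the expected total number of flips can be checked directly from this matrix: solve $(I-Q)t = \mathbf{1}$ for the expected steps to absorption, weight by the coin-induced initial distribution, and add the 3 initial flips (states ordered HHT, HHH, THT, THH, TTT, TTH, HTT; a starting HTH needs 0 further flips and drops out of the sum):

```python
from fractions import Fraction

p = Fraction(1, 20)
q = 1 - p

# Transient block Q of the transition matrix above
states = ["HHT", "HHH", "THT", "THH", "TTT", "TTH", "HTT"]
Q = [[0, 0, 0, 0, 0, 0, q],
     [q, p, 0, 0, 0, 0, 0],
     [0, 0, 0, 0, 0, 0, q],
     [q, p, 0, 0, 0, 0, 0],
     [0, 0, 0, 0, q, p, 0],
     [0, 0, q, p, 0, 0, 0],
     [0, 0, 0, 0, q, p, 0]]

# Solve (I - Q) t = 1 by Gaussian elimination in exact rational arithmetic
n = len(Q)
A = [[Fraction(i == j) - Q[i][j] for j in range(n)] + [Fraction(1)]
     for i in range(n)]
for col in range(n):
    piv = next(r for r in range(col, n) if A[r][col] != 0)
    A[col], A[piv] = A[piv], A[col]
    for r in range(n):
        if r != col and A[r][col] != 0:
            f = A[r][col] / A[col][col]
            A[r] = [a - f * b for a, b in zip(A[r], A[col])]
t = [A[i][n] / A[i][i] for i in range(n)]

# Weight by the probability of each starting 3-flip sequence
def prob(s):
    return p ** s.count("H") * q ** s.count("T")

expected = 3 + sum(prob(s) * t[i] for i, s in enumerate(states))
print(float(expected))  # about 441.0526, matching the other answers
```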
Could any equation have predicted the results of this simulation?
There is a fun way to answer this problem using martingales, and in particular using https://en.wikipedia.org/wiki/Optional_stopping_theorem. I first saw this trick in the book A First Look at Rigorous Probability Theory by Jeffrey S. Rosenthal, in the martingale chapter. (I don't have the book in front of me at the moment but I'll edit and add a page or equation number when I do.)
Imagine that at each time step $t$ a person arrives and places a bet on the outcome of the next coin toss. The payoff to their first bet is:
$\begin{equation}
\begin{cases}
-1 &\mbox{if toss } t \mbox{ is T} \\
+19 &\mbox{if toss } t \mbox{ is H} \\
\end{cases}
\end{equation}$
If they are wrong (i.e. lose money) on this bet, they stop betting. Note that their net gain if they exit at this point is $-1$. If they are correct, they continue betting on toss $t+1$ and receive the following payoff:
$\begin{equation}
\begin{cases}
+\frac{20}{19} &\mbox{if toss } t+1 \mbox{ is T} \\
-20 &\mbox{if toss } t+1 \mbox{ is H} \\
\end{cases}
\end{equation}$
If they are wrong on this second bet, they stop betting. Again, note that their net gain if they exit at this point is (by design) $19-20 = -1$. If they were correct on this second bet, they proceed to bet on the outcome of toss $t+2$ and receive:
$\begin{equation}
\begin{cases}
-\frac{400}{19} &\mbox{if toss } t+2 \mbox{ is T} \\
+400 &\mbox{if toss } t+2 \mbox{ is H} \\
\end{cases}
\end{equation}$
If they are incorrect, they exit with a net gain of $19 + \frac{20}{19} - \frac{400}{19} = -1$. If they are correct, everything stops: we have just seen an HTH sequence, and the person who started betting at the beginning of that sequence has just won $19 + \frac{20}{19} + 400 = 420.0526$.
Note that two people have started betting after this big winner: one person whose first bet was incorrect (they exit with $-1$), and a second person whose first bet was correct (they win $19$ but nothing more because the process stops).
Let $\tau$ denote the stopping time, and let $X_t$ denote the cumulative net amount won by all gamblers up to and including time $t$. Let $X_0 = 0$, and note that $X_t$ is a martingale because all bets are (by construction) fair, with an expected payoff of $0$. This will let us use https://en.wikipedia.org/wiki/Optional_stopping_theorem and in particular $\mathbf{E}[X_\tau] = \mathbf{E}[X_0] = 0$.
$X_\tau$ is the total amount won when we reach the stopping time. At this point $\tau - 3$ people will have lost their bets and exited with a net gain of $-1$, one person will have won $420.0526$, and we also have to account for the last two people who start betting after the winner. We have:
$\begin{equation}
\mathbf{E}[X_\tau] = 0 = (\mathbf{E}[\tau]-3)*(-1) + 420.0526 + (-1) + 19
\end{equation}$
This leads to $\mathbf{E}[\tau] = 420.0526 + 3 + 18 = 441.0526$, which agrees with the answers posted earlier.
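The betting arithmetic can be checked mechanically (a sketch with exact fractions; each stake is sized so that every losing exit nets $-1$):

```python
from fractions import Fraction

p = Fraction(1, 20)  # P(H)
q = 1 - p

# Fair odds: a 1-unit stake on an event of probability r wins (1-r)/r.
w1 = q / p                  # bet 1: stake 1 on H, win 19
w2 = (w1 + 1) * p / q       # bet 2: stake w1 + 1 = 20 on T, win 20/19
w3 = (w1 + w2 + 1) * q / p  # bet 3: stake w1 + w2 + 1 = 400/19 on H, win 400

winner = w1 + w2 + w3       # net gain of the player who sees the full HTH
# Optional stopping: 0 = -(E[tau] - 3) + winner + (-1) + w1
E_tau = winner + 3 - 1 + w1
print(float(winner), float(E_tau))  # about 420.0526 and 441.0526
```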
Could any equation have predicted the results of this simulation?
Here is a somewhat clumsy brute-force method to obtain the probabilities and order statistics. Getting the mean will take more work.
So first just generate the possible sequences and associated probabilities where "HTH" are the last 3 flips (with that sequence not occurring previously). Then look for patterns. For integer patterns the go-to place is http://oeis.org.
Note: To make some of the calculations easier I've used 1 for H and 0 for T. Using Mathematica the probabilities for the flip that ends with "HTH" (or "101") with general $p$ is generated as follows:
s[1] = {{1, 0, 1}};
pr[1] = p^2 (1 - p);
Print[{1, pr[1]}]
Do[
 s[i] = Select[Flatten[{Prepend[#, 1], Prepend[#, 0]} & /@ s[i - 1], 1],
   ! (#[[1]] == 1 && #[[2]] == 0 && #[[3]] == 1) &];
 pr[i] = Total[p^Total[#] (1 - p)^(Length[#] - Total[#]) & /@ s[i]],
 {i, 2, 8}]
Table[{i, pr[i]}, {i, 1, 8}] // TableForm
The pattern of the powers of $p$ and $1-p$ seems obvious but maybe not the associated coefficients. That's where http://oeis.org comes in. We plug in the coefficients for $n=8$
1,6,12,13,9,6,1,1
and find sequence A124279. That translates to the formula for the probability of flip $n$ ending in "HTH" (and not containing any previous "HTH" sequence):
pr[n_] := p^2 (1 - p) Sum[p^(n - k - 1) (1 - p)^k Sum[Binomial[j, k - j] Binomial[n - k - 1, k - j], {j, 0, n}], {k, 0, n - 1}]
or
$$pr(n) = p^2 (1-p) \sum _{k=0}^{n-1} (1-p)^k p^{-k+n-1} \sum _{j=0}^n \binom{j}{k-j} \binom{-k+n-1}{k-j}$$
The median is between flips 303 and 304, as the associated cumulative probabilities are 0.49891 and 0.500051, respectively, when $p=0.05$.
To calculate the probabilities in R you'll either need to use multiple precision arithmetic or reduce round-off errors by using logs.
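A quick sanity check of the formula against brute-force enumeration (a Python sketch with exact fractions; terms with $j > k$ vanish because the binomial with a negative lower index is zero, so the inner sum can stop at $j = k$):

```python
from fractions import Fraction
from itertools import product
from math import comb

p = Fraction(1, 20)
q = 1 - p

def pr(n):
    # Probability that the first HTH is completed at "flip n"
    # (n = 1 meaning a total of n + 2 = 3 flips), per the formula above.
    return p**2 * q * sum(
        q**k * p**(n - k - 1)
        * sum(comb(j, k - j) * comb(n - k - 1, k - j) for j in range(k + 1))
        for k in range(n))

def brute(n):
    # Enumerate all (n + 2)-flip sequences whose first HTH ends on the last flip.
    total = Fraction(0)
    for seq in product("HT", repeat=n + 2):
        s = "".join(seq)
        if s.endswith("HTH") and s.find("HTH") == len(s) - 3:
            total += p**s.count("H") * q**s.count("T")
    return total

assert all(pr(n) == brute(n) for n in range(1, 9))
print([float(pr(n)) for n in range(1, 4)])
```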
Could any equation have predicted the results of this simulation?
Disclosure: I wrote the samc R package used in this answer
This answer is more of a supplement to Stephan Kolassa's answer
In it, he showed how to construct a transition matrix representing the problem:
Image credit: Stephan Kolassa's answer
Now, as indicated in comments/another answer, this matrix can be simplified in a variety of ways, but I'm going to keep it as-is because there are other things about the experiment you can learn from it that you can't from the others.
Let's call this transition matrix $P$, which is how I define it in the samc package: samc overview
This can be broken down as:
$$
P =
\begin{bmatrix}
Q & R \\
0 & I
\end{bmatrix}
$$
Using this, there are a variety of things you can calculate about the model. For example:
$$z=(I-Q)^{-1}{\cdot}1=F{\cdot}1$$
will tell you the expected time to absorption from a given starting transient state. In this example, that means: given your last 3 flips, how many more flips you expect it to take before $HTH$ occurs. Some code:
library(samc)
p <- 0.05
q <- 1 - p
p_mat <- matrix(c(0, 0, 0, 0, 0, 0, q, p,
                  q, p, 0, 0, 0, 0, 0, 0,
                  0, 0, 0, 0, 0, 0, q, p,
                  q, p, 0, 0, 0, 0, 0, 0,
                  0, 0, 0, 0, q, p, 0, 0,
                  0, 0, q, p, 0, 0, 0, 0,
                  0, 0, 0, 0, q, p, 0, 0,
                  0, 0, 0, 0, 0, 0, 0, 1),
                8, byrow = TRUE)
rownames(p_mat) <- c("HHT", "HHH", "THT", "THH", "TTT", "TTH", "HTT", "HTH")
colnames(p_mat) <- c("HHT", "HHH", "THT", "THH", "TTT", "TTH", "HTT", "HTH")
# A samc object is the core of the package
samc_obj <- samc(p_mat)
# Given the last 3 flips, how many more flips until we hit HTH (absorption)?
survival(samc_obj)
[1] 420.0000 421.0526 420.0000 421.0526 441.0526 421.0526 441.0526
So, if we start with $HHT$, we expect it would take 420 flips on average to end up with $HTH$. $THT$ is the same, and they represent the best case scenario where we start only one flip away from $HTH$. Conversely, $TTT$ and $HTT$ are a worst case scenario where we start off needing 3 perfect flips; the result for these is 441.0526, the same as other answers.
As Neil Slater pointed out in a comment, the transition matrix can be reduced to 5 elements if this is all you want. But there are other metrics to explore. For example, we can calculate the number of times we expect a sequence to occur before we hit $HTH$:
$$F = (I-Q)^{-1}$$
Here's some code (I include some code to show the relationship to the previous metric as well):
# Given a starting point (in this case, a sequence of 3 flips), how many times
# would we expect the different combinations of 3 flips to occur before absorption?
visitation(samc_obj, origin = "HHT")
[1] 1.95 0.05 18.05 0.95 361.00 19.00 19.00
sum(visitation(samc_obj, origin = "HHT")) # Compare to survival() result
[1] 420
# Instead of a start point, we can look at an end point and how often we expect
# it to occur for each of the possible starting points
visitation(samc_obj, dest = "THT")
[1] 18.05 18.05 19.05 18.05 19.00 19.00 19.00
# These results are just rows/cols of a larger matrix. We can get the entire matrix
# of the start/end possibilities but first we have to disable some safety measures
# in place because this package is designed to work with extremely large P matrices
# (millions of rows/cols) where these types of results will consume too much RAM and
# crash R
visitation(samc_obj)
Error: This version of the visitation() method produces a large dense matrix.
See the documentation for details.
samc_obj$override <- TRUE
visitation(samc_obj)
[,1] [,2] [,3] [,4] [,5] [,6] [,7]
[1,] 1.95 0.05000000 18.05 0.95 361 19 19
[2,] 1.95 1.10263158 18.05 0.95 361 19 19
[3,] 0.95 0.05000000 19.05 0.95 361 19 19
[4,] 1.95 0.10263158 18.05 1.95 361 19 19
[5,] 1.00 0.05263158 19.00 1.00 381 20 19
[6,] 1.00 0.05263158 19.00 1.00 361 20 19
[7,] 1.00 0.05263158 19.00 1.00 380 20 20
rowSums(visitation(samc_obj)) # equivalent to survival() above
[1] 420.0000 421.0526 420.0000 421.0526 441.0526 421.0526 441.0526
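The same relationships can be verified outside of samc (a numpy sketch; the transient block and $p = 0.05$ are taken from the R code above): the first row of $F$ matches `visitation(origin = "HHT")` and the row sums match `survival()`.

```python
import numpy as np

p, q = 0.05, 0.95
# Transient block Q (row/col order: HHT, HHH, THT, THH, TTT, TTH, HTT)
Q = np.array([
    [0, 0, 0, 0, 0, 0, q],
    [q, p, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, q],
    [q, p, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, q, p, 0],
    [0, 0, q, p, 0, 0, 0],
    [0, 0, 0, 0, q, p, 0],
])

F = np.linalg.inv(np.eye(7) - Q)   # fundamental matrix
print(np.round(F[0], 2))           # expected visits from HHT, cf. visitation()
print(np.round(F.sum(axis=1), 4))  # row sums equal the survival() results
```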
Another one:
$$D=(F-I)\operatorname{diag}(F)^{-1}$$
would calculate the probability of a sequence of flips occurring before you hit $HTH$. And there's a "short-term" version of this:
$$\tilde{D}_{jt}=(\sum_{n=0}^{t-1}\tilde{Q}^n)\tilde{q}_j$$
which calculates the same thing, but within a given number of time steps (or coin flips in this context). So let's say you're interested in the probability of flipping $TTT$ with only 3 flips:
dispersal(samc_obj, dest = "TTT", time = 3)
[1] 0.902500 0.857375 0.902500 0.857375 NA 0.857375 0.950000
Depending on your last 3 flips, you basically have 3 options (4 if you count having already flipped $TTT$): $0.95$, $0.95^2=0.9025$, or $0.95^3= 0.857375$.
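A rough cross-check of this short-term result (a numpy sketch, not the samc implementation: $TTT$ is made absorbing so that the entries of $P^3$ give the probability of hitting it within 3 flips; the matrix and $p = 0.05$ are taken from the R code above):

```python
import numpy as np

p, q = 0.05, 0.95
# Full transition matrix P (order: HHT, HHH, THT, THH, TTT, TTH, HTT, HTH)
P = np.array([
    [0, 0, 0, 0, 0, 0, q, p],
    [q, p, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, q, p],
    [q, p, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, q, p, 0, 0],
    [0, 0, q, p, 0, 0, 0, 0],
    [0, 0, 0, 0, q, p, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 1],
])
P[4] = 0
P[4, 4] = 1  # make TTT absorbing, i.e. a "visit detector"

# Probability of having hit TTT within 3 flips, from each starting state
# (the TTT entry itself comes out as 1 here, where dispersal() reports NA)
hit3 = np.linalg.matrix_power(P, 3)[:, 4]
print(np.round(hit3, 6))
```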
Obviously, some of these things can be easier or more intuitive to calculate using other methods, but once set up, absorbing Markov chains give you a lot of flexibility in exploring a scenario.
A full list of the things you can calculate (at least using my package) are available in the function reference: samc functions. Which calculations are relevant depends on the context. When writing the functions, I used the book “Finite Markov Chains” by Kemeny and Snell as reference, which includes proofs. You can find the pdf of it online for free (legally, I believe) pretty easily.
Tying things back to OP's questions: as the other answers have shown, there are a variety of ways to mathematically model your experiment, including Markov chains. One thing Stephan Kolassa alluded to, and what I was hoping to show, is that there is a LOT more you can learn about this experiment than you might have realized.
Note: the package was originally written for spatial applications, so a lot of the terminology in it is biased towards that. However, it is usable for pretty much any application of absorbing Markov chains.
|
15,944
|
Could any equation have predicted the results of this simulation?
|
This seems very close to Shannon-related theorems. If you posit "HTH" as your "end of message" string, you want to estimate the chance that "HTH" shows up in random data. I suspect a little digging into his work will provide the equations/formulas of interest.
And because I can't resist, "HTH"
|
15,945
|
Could any equation have predicted the results of this simulation?
|
Final formula appears to be
sum(i = 1 to length(pattern) : if (first i flips of pattern match the last i flips of pattern) then P(series of i flips matches first i flips of pattern)^-1 else 0)
(For a fair coin, this reduces to Conway's algorithm; it can probably be proven by a similar method.)
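A direct implementation of this formula (a sketch in Python; the biased-coin probabilities $P(H)=0.05$, $P(T)=0.95$ are assumed for illustration, matching the other answers):

```python
def expected_flips(pattern, probs):
    """Expected number of flips until `pattern` first appears.

    pattern: a string like "HTH"; probs: dict mapping symbol -> probability.
    Implements the prefix/suffix correlation sum described above
    (Conway's algorithm generalized to a biased coin).
    """
    total = 0.0
    for i in range(1, len(pattern) + 1):
        if pattern[:i] == pattern[-i:]:  # length-i prefix equals length-i suffix
            p_prefix = 1.0
            for c in pattern[:i]:
                p_prefix *= probs[c]
            total += 1.0 / p_prefix      # add P(prefix)^-1 for each match
    return total

print(expected_flips("HTH", {"H": 0.05, "T": 0.95}))  # biased coin: ~441.05
print(expected_flips("HTH", {"H": 0.5, "T": 0.5}))    # fair coin: 10
```

For $HTH$ only $i=1$ and $i=3$ match, giving $1/P(H) + 1/P(HTH)$, which reproduces the 441.05 figure from the simulation.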
|
15,946
|
Can I trust a significant result of a t-test if the sample size is small?
|
In theory if all the assumptions of the t-test are true then there's no problem with a small sample size.
In practice there are some not-quite-true assumptions which we can get away with for large sample sizes but they can cause problems for small sample sizes. Do you know if the underlying distribution is normally distributed? Are all the samples independent and identically distributed?
If you doubt the validity of the test then an alternative you can make use of is bootstrapping. Bootstrapping involves resampling (with replacement) from your sample to approximate the sampling distribution of your statistic without relying on normality assumptions. Perhaps your null hypothesis is $\mu<0$ and your p-value is 0.05, but the bootstrapping shows that the sample mean is less than zero 10% of the time. This would suggest that the p-value of 0.05 overstates the evidence, and you should be less confident that the null hypothesis is false.
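A minimal sketch of that idea in Python (the synthetic sample and the 10,000 resamples are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
sample = rng.normal(loc=0.4, scale=1.0, size=8)  # small synthetic sample

# Resample with replacement and record each bootstrap sample's mean
boot_means = np.array([
    rng.choice(sample, size=sample.size, replace=True).mean()
    for _ in range(10_000)
])

# Fraction of bootstrap means below zero: if this is not small,
# treat a p-value near 0.05 with some suspicion.
print((boot_means < 0).mean())
```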
|
15,947
|
Can I trust a significant result of a t-test if the sample size is small?
|
You should rarely trust any single significant result. You didn't say why you were using a one-tailed instead of a two-tailed test, so hopefully you have a good reason for doing so other than struggling to be able to claim a statistically significant outcome!
Setting that aside, consider the following from p. 261 of Sauro, J., & Lewis, J. R. (2016). Quantifying the User Experience: Practical Statistics for User Research, 2nd Ed.. Cambridge, MA: Morgan-Kaufmann.
How Ronald Fisher recommended using p-values
When Karl Pearson was the grand old man of statistics and Ronald Fisher was a relative newcomer, Pearson, apparently threatened by Fisher’s ideas and mathematical ability, used his influence to prevent Fisher from publishing in the major statistical journals of the time, Biometrika and the Journal of the Royal Statistical Society. Consequently, Fisher published his ideas in a variety of other venues such as agricultural and meteorological journals, including several papers for the Proceedings of the Society for Psychical Research. It was in one of the papers for this latter journal that he mentioned the convention of setting what we now call the acceptable Type I error (alpha) to 0.05 and, critically, also mentioned the importance of reproducibility when encountering an unexpected significant result:
An observation is judged to be significant, if it would rarely have been produced, in the absence of a real cause of the kind we are seeking. It is a common practice to judge a result significant, if it is of such a magnitude that it would have been produced by chance not more frequently than once in twenty trials. This is an arbitrary, but convenient, level of significance for the practical investigator, but it does not mean that he allows himself to be deceived once in every twenty experiments. The test of significance only tells him what to ignore, namely, all experiments in which significant results are not obtained. He should only claim that a phenomenon is experimentally demonstrable when he knows how to design an experiment so that it will rarely fail to give a significant result. Consequently, isolated significant results which he does not know how to reproduce are left in suspense pending further investigation. (Fisher, 1929, p. 191)
Reference
Fisher, R. A. (1929). The statistical method in psychical research. Proceedings of the Society for Psychical Research, 39, 189-192.
|
15,948
|
Can I trust a significant result of a t-test if the sample size is small?
|
Imagine yourself to be in a situation where you're doing many similar tests, in a set of circumstances where some fraction of the nulls are true.
Indeed, let's model it using a super-simple urn-type model; in the urn, there are numbered balls each corresponding to an experiment you might choose to do, some of which have the null true and some which have the null false. Call the proportion of true nulls in the urn $t$.
To further simplify the idea, let us assume the power for those false nulls is constant (at $(1-\beta)$, since $\beta$ is the usual symbol for the type II error rate).
You choose some experiments from our urn ($n$ of them, say) "at random", perform them and reject or fail to reject their hypothesis. We can assume that the total number of experiments in the urn ($M$, say) is large enough that it doesn't make a difference that this is sampling without replacement (i.e. we'd be happy to approximate this as a binomial if need be), and both $n$ and $M$ are large enough that we can discuss what happens on average as if they're what we experience.
What proportion of your rejections will be "correct"?
Expected total number of rejections: $nt\alpha+n(1-t)(1-\beta)$
Expected total number of correct rejections: $n(1-t)(1-\beta)$
Overall proportion of times a rejection was actually the right decision: $\frac{(1-t)(1-\beta)}{t\alpha+(1-t)(1-\beta)}$
Overall proportion of times a rejection was an error: $\frac{t\alpha}{t\alpha+(1-t)(1-\beta)}$
For the proportion of correct rejections to be more than a small number you need to avoid the situation where $(1-t)(1-\beta)\ll t\alpha$
Since in our setup a substantial fraction of nulls are true, if $1-\beta$ is not substantially larger than $\alpha$ (i.e. if you don't have fairly high power), a lot of our rejections are mistakes!
So when your sample size is small (and hence power is low), if a reasonable fraction of our nulls were true, we'd often be making an error when we reject.
The situation isn't much better if almost all our nulls are strictly false -- while most of our rejections will be correct (trivially, since tiny effects are still strictly false), if the power isn't high, a substantial fraction of those rejections will be "in the wrong direction" - we'll conclude the null is false quite often because by chance the sample turned out to be on the wrong side (this may be one argument to use one sided tests - when one sided tests make sense - to at least avoid rejections that make no sense if large sample sizes are hard to get).
We can see that small sample sizes can certainly be a problem.
[This proportion of incorrect rejections is called the false discovery rate]
If you have a notion of likely effect size you're in a better position to judge what an adequate sample size might be. With large anticipated effects, a rejection with a small sample size would not necessarily be a major concern.
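To put numbers on the proportion-of-erroneous-rejections formula above (a small sketch; the values of $t$, $\alpha$, and the power are assumed purely for illustration):

```python
def false_discovery_rate(t, alpha, power):
    """Expected fraction of rejections that are errors:
    t*alpha / (t*alpha + (1-t)*(1-beta)), per the formula above."""
    return t * alpha / (t * alpha + (1 - t) * power)

alpha = 0.05
for power in (0.8, 0.2, 0.06):  # well-powered vs. small-sample studies
    fdr = false_discovery_rate(t=0.5, alpha=alpha, power=power)
    print(f"power={power:.2f}: {fdr:.1%} of rejections are false")
```

With half the nulls true, dropping power from 0.8 to 0.2 pushes the error fraction from about 6% to 20%, which is the point of the argument.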
|
15,949
|
Can I trust a significant result of a t-test if the sample size is small?
|
Some of the original work of Gosset (aka Student), for which he developed the t test, involved yeast samples of n = 4 and 5. The test was specifically designed for very small samples. Otherwise, the normal approximation would be fine. That said, Gosset was doing very careful, controlled experiments on data that he understood very well. There's a limit to the number of things a brewery has to test, and Gosset spent his working life at Guinness. He knew his data.
I'm a bit suspicious of your emphasis on one-sided testing. The logic of testing is the same whatever the hypothesis, but I've seen people go with a significant one-sided test when the two-sided was non-significant.
This is what a (upper) one-sided test implies. You are testing that a mean is 0. You do the math and are prepared to reject when T > 2.5. You run your experiment and observe that T=-50,000. You say, "phhhhht", and life goes on. Unless it is physically impossible for the test statistic to sink way below the hypothesized parameter value, and unless you would never take any decision if the test statistic goes in the opposite direction than you expect, you should be using a two-sided test.
|
15,950
|
Can I trust a significant result of a t-test if the sample size is small?
|
The main thing you need to worry about is the power of your test. In particular, you might want to do a post-hoc power analysis to determine how likely you are, given your sample size, to identify a true significant effect of a reasonable size. If typical effects are very large, an n of 8 could be totally adequate (as with many experiments in molecular biology). If the effects you are interested in are typically subtle, however (as in many social psychology experiments), an n of thousands might still be underpowered.
This is important because underpowered tests can give very misleading results. For example, if your test is underpowered, even if you find a significant result, you have a relatively high probability of making what Andrew Gelman calls a "Type S" error, i.e., there is a real effect but in the opposite direction, or a "Type M" error, i.e., there is a real effect but the true magnitude is much weaker than what is estimated from the data.
Gelman and Carlin wrote a useful paper about doing post-hoc power analysis that I think applies in your case. Importantly, they recommend using independent data (i.e., not the data you tested, but reviews, modeling, the results of similar experiments, etc.) to estimate a plausible true effect size. By performing power analysis using that plausible estimated true effect size and comparing to your results, you can determine the probability of making a Type S error and the typical "exaggeration ratio," and thus get a better sense for how strong your evidence really is.
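The Type S/Type M phenomenon is easy to see in simulation (a hedged sketch, not Gelman and Carlin's procedure: the true effect, n, noise level, and number of simulations are assumptions, and the two-sided 5% t critical value for df = 7 is hard-coded):

```python
import numpy as np

rng = np.random.default_rng(1)
true_effect, n, sims = 0.2, 8, 20_000
t_crit = 2.365  # two-sided 5% critical value of Student's t, df = n - 1 = 7

estimates = []
for _ in range(sims):
    x = rng.normal(true_effect, 1.0, size=n)
    t = x.mean() / (x.std(ddof=1) / np.sqrt(n))
    if abs(t) > t_crit:          # keep only "significant" results
        estimates.append(x.mean())

estimates = np.array(estimates)
power = len(estimates) / sims
exaggeration = np.abs(estimates).mean() / true_effect  # Type M ratio
type_s = (estimates < 0).mean()                        # Type S rate
print(f"power ~ {power:.2f}, exaggeration ~ {exaggeration:.1f}x, "
      f"P(wrong sign | significant) ~ {type_s:.2f}")
```

With a subtle true effect and n = 8, the significant estimates overstate the true effect several-fold, and a non-trivial fraction even have the wrong sign.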
|
15,951
|
Can I trust a significant result of a t-test if the sample size is small?
|
One could say that the whole point of statistical significance is to answer the question "can I trust this result, given the sample size?". In other words, the whole point is to control for the fact that with small sample sizes, you can get flukes, when no real effect exists. The statistical significance, that is to say the p-value, is precisely the answer to the question, "if no real effect existed, how likely would I be to get a fluke as big as this?". If it's very unlikely, that indicates that it's not a fluke.
So the answer is "yes", if the p-value is low, and if you have followed the correct statistical procedures and are satisfying the relevant assumptions, then yes, it is good evidence, and has the same weight as if you'd gotten the same p-value with a very large sample size.
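A quick stdlib-Python sanity check of that claim (a sketch; 2.145 is the approximate two-sided 5% t critical value for 14 degrees of freedom): even at n = 8 per group, the test rejects a true null about 5% of the time, which is exactly the control the p-value promises.

```python
import math
import random
import statistics

def false_positive_rate(n=8, sims=5000, seed=0):
    """Under the null (both groups drawn from the same distribution), a
    5%-level two-sample t-test rejects about 5% of the time, regardless
    of how small the sample is."""
    rng = random.Random(seed)
    crit = 2.145  # two-sided 5% t quantile for 2n - 2 = 14 df (approx.)
    rejections = 0
    for _ in range(sims):
        a = [rng.normalvariate(0.0, 1.0) for _ in range(n)]
        b = [rng.normalvariate(0.0, 1.0) for _ in range(n)]
        diff = statistics.mean(b) - statistics.mean(a)
        sp2 = (statistics.variance(a) + statistics.variance(b)) / 2
        t = diff / math.sqrt(2 * sp2 / n)
        rejections += abs(t) > crit
    return rejections / sims
```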
|
15,952
|
Why do we use the Greek letter μ (Mu) to denote population mean or expected value in probability and statistics
|
The letters that derive from $\mu$ include the Roman M and the Cyrillic М. Hence, considering that the word "mean" starts with an $m$, the choice seems relatively straightforward, given an already existing tradition of using Greek letters in mathematical abbreviation.
To satisfy those craving actual historical research, and assuming that the webpage here is credible, I can now confirm that the assumption that it comes from English turns out to be valid.
Fisher wrote the normal density with $m$ for the mean (see section 12 of his Statistical Methods for Research Workers) until the mid-1930s when he replaced $m$ with $\mu$. The new symbol appears in The Fiducial Argument in Statistical Inference (1935) and it went into the 1936 (sixth) edition of the Statistical Methods for Research Workers.
|
15,953
|
Why do we use the Greek letter μ (Mu) to denote population mean or expected value in probability and statistics
|
There is a general rule to use Greek letters for parameters and Latin letters for statistics.
Why $\mu$? Well, the word 'mean' in English starts with M and $\mu$ sounds like M. But also, per Google translate:
Latin: Media
French: Moyenne
Spanish: Media
German: Mittel
Dutch: Midden
|
15,954
|
Why do we use the Greek letter μ (Mu) to denote population mean or expected value in probability and statistics
|
The use of Greek letters in modern mathematics is basically a consequence of the humanist education. The normal distribution was introduced by Gauß in his celestial mechanics paper "Theoria motus corporum coelestium in sectionibus conicis solem ambientium" in 1809. The parameters $\mu$ and $\sigma$ of the normal distribution are called "Mittelwert" and "Standardabweichung" in German and their Greek letters have the phonetic equivalents "M" and "S" in Latin script.
That may well have been the motivation for giving them those letters. The equivalent Latin (rather than German) terms would likely also start with "M" and "S".
|
15,955
|
When to remove insignificant variables?
|
Let me first ask this: What is the goal of the model? If you are only interested in predicting if a customer will buy, then statistical hypothesis tests really aren't your main concern. Instead, you should be externally validating your model via a validation/test procedure on unseen data.
If, instead, you are interested in examining which factors contribute to the probability of a customer buying, then there is no need to remove variables which fail to reject the null (especially in a stepwise sort of manner). Presumably, you included a variable in your model because you thought (from past experience or expert opinion) that it played an important part in a customer deciding if they will buy. That the variable failed to reject the null doesn't make your model a bad one, it just means that your sample didn't detect an effect of that variable. That's perfectly ok.
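A minimal stdlib-Python sketch of the external-validation idea (the data generator, the one-feature logistic fit, and the split sizes are all hypothetical): fit on one part of the data, then judge the model only on rows it never saw.

```python
import math
import random

def fit_logistic(train, lr=0.1, epochs=200):
    """One-feature logistic regression by stochastic gradient ascent
    (a toy stand-in for whatever model you actually fit)."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in train:
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            w += lr * (y - p) * x
            b += lr * (y - p)
    return w, b

def holdout_accuracy(seed=0):
    """Simulate 'will the customer buy?' data and validate on unseen rows."""
    rng = random.Random(seed)
    data = []
    for _ in range(600):
        x = rng.uniform(-2.0, 2.0)
        p_buy = 1.0 / (1.0 + math.exp(-2.0 * x))  # assumed true propensity
        data.append((x, 1 if rng.random() < p_buy else 0))
    train, test = data[:400], data[400:]          # hold out unseen data
    w, b = fit_logistic(train)
    correct = sum(((w * x + b) > 0.0) == (y == 1) for x, y in test)
    return correct / len(test)
```

The number that matters for a prediction-oriented model is the held-out accuracy (or AUC, log-loss, etc.), not the p-values of the coefficients.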
|
15,956
|
When to remove insignificant variables?
|
Have a look at the help pages for step(), drop1() and add1(). These will help you to add/remove variables based on AIC. However, all such methods are somewhat flawed in their path dependence. A better way would be to use the functions in the penalized or glmnet package to perform a lasso regression.
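For readers not working in R, here is a stdlib-Python sketch of the coordinate-descent algorithm that lasso solvers such as glmnet use (simplified: no intercept, features assumed roughly standardized, and the toy data and penalty are illustrative): the L1 penalty shrinks the coefficient of the uninformative feature to essentially zero, doing variable selection for you.

```python
import random

def soft_threshold(rho, lam):
    """Soft-thresholding operator, the source of lasso's exact zeros."""
    if rho > lam:
        return rho - lam
    if rho < -lam:
        return rho + lam
    return 0.0

def lasso_cd(X, y, lam, iters=50):
    """Lasso by coordinate descent: cycle over coefficients, each time
    regressing the partial residual on one feature and soft-thresholding."""
    n, p = len(y), len(X[0])
    w = [0.0] * p
    for _ in range(iters):
        for j in range(p):
            # correlation of feature j with the residual, with feature j's
            # own contribution added back in
            rho = sum(X[i][j] * (y[i]
                                 - sum(w[k] * X[i][k] for k in range(p))
                                 + w[j] * X[i][j])
                      for i in range(n))
            z = sum(X[i][j] ** 2 for i in range(n))
            w[j] = soft_threshold(rho, lam) / z
    return w

# Toy data: y depends on the first feature only; lasso zeroes out the rest.
rng = random.Random(0)
X = [[rng.gauss(0, 1), rng.gauss(0, 1)] for _ in range(200)]
y = [3.0 * row[0] + 0.5 * rng.gauss(0, 1) for row in X]
w = lasso_cd(X, y, lam=30.0)
```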
|
15,957
|
When to remove insignificant variables?
|
What are the correlations among the independent variables? This is less important for pure prediction, but if you want to gain some inferential information it is important that the independent variables be fairly uncorrelated. Typically, when you use logistic regression in a business setting, both inferential information about the variables used along with a good prediction are what stakeholders are looking for.
Additionally, another good reason to remove variables is model parsimony. Some reasons for this are internal review purposes, legal regulation, and ease of implementation. These make it highly desirable to find the smallest set of variables that gives good business information and good predictions. For example, if you are developing a credit model, every variable is subject to legal review, every variable has to be available and immediately return values when called to score the loan, and the stakeholders (who usually are not versed in model building) tend to not want to look at complicated models loaded with variables.
It may also be helpful to try a random forest to get some idea of variable importance and also to check the predictive power with and without all the variables.
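The variable-importance idea can be sketched without a full random forest: permutation importance (the mechanism behind random-forest importance scores) shuffles one column and measures how much the fit degrades. A stdlib-Python illustration, using a hypothetical already-fitted model in place of a forest:

```python
import random

def r_squared(y, yhat):
    ybar = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, yhat))
    ss_tot = sum((a - ybar) ** 2 for a in y)
    return 1.0 - ss_res / ss_tot

def permutation_importance(X, y, predict, rng):
    """Drop in fit quality when each column is shuffled in turn; a big
    drop means the model leans heavily on that variable."""
    base = r_squared(y, [predict(row) for row in X])
    drops = []
    for j in range(len(X[0])):
        col = [row[j] for row in X]
        rng.shuffle(col)
        Xp = [row[:j] + [col[i]] + row[j + 1:] for i, row in enumerate(X)]
        drops.append(base - r_squared(y, [predict(row) for row in Xp]))
    return drops

# Toy check: feature 0 drives y, feature 1 is noise the model barely uses.
rng = random.Random(1)
X = [[rng.gauss(0, 1), rng.gauss(0, 1)] for _ in range(500)]
y = [2.0 * row[0] + 0.3 * rng.gauss(0, 1) for row in X]
model = lambda row: 2.0 * row[0] + 0.05 * row[1]  # stand-in fitted model
drops = permutation_importance(X, y, model, rng)
```

Here the informative feature shows a large drop while the noise feature shows essentially none, which is the kind of ranking a random forest's importance plot gives you.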
Finally, you should have a good reason for transforming a variable. Throwing every transformation against a variable until you find one that gives you the result you want is a good way to get an overfit model that performs poorly on new data.
|
15,958
|
What is the easiest way to create publication-quality plots under Linux?
|
The easiest way is to use R
Use read.csv to enter the data into R, then use a combination of the plot and lines commands
If you want something really special, then look at the libraries ggplot2 or lattice.
In ggplot2 the following commands should get you started.
require(ggplot2)
#You would use read.csv here
N = 10
d = data.frame(x=1:N,y1=runif(N),y2=rnorm(N), y3 = rnorm(N, 0.5))
p = ggplot(d)
p = p+geom_line(aes(x, y1, colour="Type 1"))
p = p+geom_line(aes(x, y2, colour="Type 2"))
p = p+geom_line(aes(x, y3, colour="Type 3"))
#Add points
p = p+geom_point(aes(x, y3, colour="Type 3"))
print(p)
This would give you a plot of the three series, with points overlaid on the third.
Saving plots in R
Saving plots in R is straightforward:
#See ?jpeg for other saving options
jpeg("figure.jpg")
print(p)#for ggplot2 graphics
dev.off()
Instead of JPEGs you can also save as a PDF or PostScript file:
#This example uses R base graphics
#Just change to print(p) for ggplot2
pdf("figure.pdf")
plot(d$x, d$y1, type="l")
lines(d$x, d$y2)
dev.off()
|
15,959
|
What is the easiest way to create publication-quality plots under Linux?
|
It's hard to go past R for graphics. You could do what you want in 3 lines. For example, assuming the csv file has four columns:
x <- read.csv("file.csv")
matplot(x[,1],x[,2:4],type="l",col=1:3)
legend("topleft",legend=c("A","B","C"),lty=1,col=1:3)
|
15,960
|
What is the easiest way to create publication-quality plots under Linux?
|
R is definitely the answer. I would just add to what Rob and Colin already said:
To improve the quality of your plots, you should consider using the Cairo package for the output device. That will greatly improve the quality of the final graphics. You simply call the function before plotting and it redirects to Cairo as the output device.
Cairo(600, 600, file="plot.png", type="png", bg="white")
plot(rnorm(4000),rnorm(4000),col="#ff000018",pch=19,cex=2) # semi-transparent red
dev.off() # creates a file "plot.png" with the above plot
Lastly, in terms of putting it in a publication, that's the role that Sweave plays. It makes combining plots with your paper a trivial operation (and has the added benefit of leaving you with something that is reproducible and understandable). Use cacheSweave if you have long-running computations.
|
15,961
|
What is the easiest way to create publication-quality plots under Linux?
|
My favorite tool is Python with matplotlib.
The advantages:
Immediate export from the environment where I run my experiments
Support for the scipy/numpy data structures
Familiar syntax/options (matlab background)
Full LaTeX support for labels/legends etc. So same typesetting as in the rest of your document!
Specifically, for different file formats like svg and eps, use the format parameter of savefig.
An example:
input.csv
"Line 1",0.5,0.8,1.0,0.9,0.9
"Line 2",0.2,0.7,1.2,1.1,1.1
Code:
import csv
import matplotlib.pyplot as plt
legends = []
for row in csv.reader(open('input.csv')):
    legends.append(row[0])
    plt.plot([float(v) for v in row[1:]])  # csv yields strings; convert
plt.legend(legends)
plt.savefig("out.svg", format='svg')
|
15,962
|
What is the easiest way to create publication-quality plots under Linux?
|
Take a look at the sample galleries for three popular visualization libraries:
matplotlib gallery (Python)
R graph gallery (R) -- (also see ggplot2, scroll down to reference)
prefuse visualization gallery (Java)
For the first two, you can even view the associated source code -- the simple stuff is simple, not many lines of code. The prefuse case will have the requisite Java boilerplate code. All three support a number of backends/devices/renderers (pdf, ps, png, etc). All three are clearly capable of high quality graphics.
I think it pretty much boils down to which language are you most comfortable working in. Go with that.
|
15,963
|
What is the easiest way to create publication-quality plots under Linux?
|
Another option is Gnuplot
|
15,964
|
What is the easiest way to create publication-quality plots under Linux?
|
Easy is relative. No tool is easy until you know how to use it. Some tools may appear more difficult at first, but provide you with much more fine-grained control once you master them.
I have recently started to make my plots in pgfplots. Being a LaTeX package (on top of tikz), it is particularly good at making things look good. Fonts will be consistent with the rest of the document and it's much easier to integrate your plots visually. It's not the easiest option to make plots, but it's a rather easy way to make plots that are certainly publication-quality.
|
15,965
|
What does "a.s." stand for?
|
It stands for "almost surely," i.e. the probability of this occurring is 1.
See: https://en.wikipedia.org/wiki/Almost_surely
|
15,966
|
What does "a.s." stand for?
|
As noted by @Matt, a.s. stands for "almost surely", or with probability 1.
Why the "almost" in "almost surely"? Because just because something happens "almost surely" does not mean it must happen. For example, suppose $X \sim$ Uniform(0,1). What's $P(X = 0.5)$? Well, since $X$ is a continuous random variable, $P(X = x) = 0$ for any single value $x$ (indeed, for any countable set of values). Therefore, $X$ is almost surely not equal to 0.5. But that's not to say $X$ cannot be equal to 0.5!
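A quick stdlib-Python illustration (simulation settings are arbitrary): the chance of a Uniform(0,1) draw landing within $\varepsilon$ of 0.5 is about $2\varepsilon$, which shrinks to zero as $\varepsilon \to 0$, matching $P(X = 0.5) = 0$.

```python
import random

def prob_near(value=0.5, eps=1e-3, sims=200_000, seed=0):
    """Estimate P(|X - value| < eps) for X ~ Uniform(0, 1); this is about
    2*eps, so the probability of hitting any exact point shrinks to zero."""
    rng = random.Random(seed)
    hits = sum(abs(rng.random() - value) < eps for _ in range(sims))
    return hits / sims
```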
|
15,967
|
What does "a.s." stand for?
|
As mentioned above, a.s. stands for almost surely, but in this case they are talking about almost sure convergence. From Wikipedia,
To say that the sequence $X_n$ converges almost surely or almost everywhere or with probability 1 or strongly towards $X$ means that
$$Pr(\lim_{n\to\infty}{X_n}=X)=1$$
|
15,968
|
What does "a.s." stand for?
|
As already noted by others, "a.s." stands for "almost surely". The wikipedia article quoted by @Matt is a good start for almost surely and its synonyms.
There is, however, a subtle distinction between almost surely (or with probability 1) and always [resp., between with probability zero and never].
Imagine an infinite series of i.i.d. random variables which come up heads a.s. (= with probability 1) and tails with probability zero. Such an infinite series can contain a finite number of tails even though the probability of tails is 0, since the empirical distribution of the series remains 1-0 (only a finite number of instances out of infinitely many). On the other hand, saying that the series is always heads means that not even a single tail occurs in the series.
|
15,969
|
Why use odds and not probability in logistic regression?
|
The advantage is that the odds defined on $(0,\infty)$ map to log-odds on $(-\infty, \infty)$, while this is not the case for probabilities. As a result, you can use regression equations like
$$\log \left(\frac{p_i}{1-p_i}\right) = \beta_0 + \sum_{j=1}^J \beta_j x_{ij}$$
for the log-odds without any problem (i.e. for any value of the regression coefficients and covariates a valid value for the odds are predicted). You would need extremely complicated multi-dimensional constraints on the regression coefficients $\beta_0,\beta_1,\ldots$, if you wanted to do the same for the log probability (and of course this would not work in a straightforward way for the untransformed probability or odds, either). As a consequence you get effects like being unable to have a constant risk ratio across all baseline probabilities (some risk ratios would result in probabilities > 1), while this is not an issue with an odds-ratio.
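A small sketch of this point (Python, with made-up coefficients): the inverse logit maps any value of the linear predictor to a valid probability, while a linear model on the probability scale does not.

```python
import math

def inv_logit(eta):
    """Map a log-odds value (any real number) to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-eta))

# Any value of the linear predictor yields a valid probability...
for eta in (-10.0, -1.0, 0.0, 1.0, 10.0):
    p = inv_logit(eta)
    assert 0.0 < p < 1.0

# ...whereas a linear model fitted directly to the probability can leave [0, 1].
beta0, beta1 = 0.75, 0.5   # made-up coefficients
print(beta0 + beta1 * 2)   # 1.75 -- not a valid probability
```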
|
15,970
|
Why use odds and not probability in logistic regression?
|
The odds is the expected number of "successes" per "failure", so it can take values less than one, one or more than one, but negative values won't make sense; you can have 3 successes per failure, but -3 successes per failure does not make sense. The logarithm of an odds can take any positive or negative value. Logistic regression is a linear model for the log(odds). This works because the log(odds) can take any positive or negative number, so a linear model won't lead to impossible predictions. We can do a linear model for the probability, a linear probability model, but that can lead to impossible predictions as a probability must remain between 0 and 1.
|
15,971
|
Why use odds and not probability in logistic regression?
|
McCullagh and Nelder (1989, Generalized Linear Models) list two reasons.
First, analytic results with odds are more easily interpreted: the effect of a unit change in an explanatory variable $x_2$ is to increase the odds of a positive response multiplicatively by the factor $\exp(\beta_2)$.
$\beta_2$ has units of log-odds per unit of $x_2$ where $x_2$ is continuous.
$\exp(\beta_2)$ is the odds ratio for a categorical variable $x_2$.
The corresponding statements on the probability scale are more complicated.
Probabilities are readily back-calculated from odds: $p = \text{odds}/(1+\text{odds})$.
Second, an important property of the logistic (log-odds) function not shared by the probability scale functions (probit, log-log) is that differences on the logistic scale can be estimated regardless of whether the data are sampled prospectively or retrospectively.
This is an advantage in medical applications because prospective studies can take years to accumulate sufficient data for making inferences.
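A sketch of the first point (Python, with hypothetical coefficients): a unit change in $x_2$ multiplies the odds by $\exp(\beta_2)$ whatever the baseline, and probabilities back-calculate from odds as stated.

```python
import math

def inv_logit(eta):
    return 1.0 / (1.0 + math.exp(-eta))

beta0, beta2 = -1.0, 0.5   # hypothetical logistic-regression coefficients
x2 = 3.0

odds_before = math.exp(beta0 + beta2 * x2)
odds_after = math.exp(beta0 + beta2 * (x2 + 1))  # one-unit increase in x2

# The odds are multiplied by exp(beta2), regardless of the baseline value of x2:
assert abs(odds_after / odds_before - math.exp(beta2)) < 1e-12

# Probabilities back-calculate from odds via p = odds / (1 + odds):
p = odds_before / (1 + odds_before)
assert abs(p - inv_logit(beta0 + beta2 * x2)) < 1e-12
```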
|
15,972
|
Example where $X$ and $Z$ are correlated, $Y$ and $Z$ are correlated, but $X$ and $Y$ are independent
|
Intuitive example: $Z = X + Y$, where $X$ and $Y$ are any two independent random variables with finite nonzero variance.
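A quick simulation of this construction (a Python sketch; for standard normal $X$ and $Y$, the correlation of each with $Z$ is $1/\sqrt{2} \approx 0.707$):

```python
import random
import math

def corr(a, b):
    """Sample Pearson correlation of two equal-length lists."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

random.seed(1)
n = 100_000
X = [random.gauss(0, 1) for _ in range(n)]
Y = [random.gauss(0, 1) for _ in range(n)]   # independent of X
Z = [x + y for x, y in zip(X, Y)]

# X and Y are independent (sample correlation near 0),
# yet both are strongly correlated with Z = X + Y.
print(round(corr(X, Y), 3), round(corr(X, Z), 3), round(corr(Y, Z), 3))
```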
|
15,973
|
Example where $X$ and $Z$ are correlated, $Y$ and $Z$ are correlated, but $X$ and $Y$ are independent
|
Roll two dice.
X is the number on the first die, Z is the sum of the two dice, and Y is the number on the second die.
X and Z are correlated, Y and Z are correlated, but X and Y are completely independent.
(This is a concrete instance of the answer given by fblundun, but I came up with it before seeing their answer.)
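The correlations can be checked exactly by enumerating all 36 equally likely outcomes (a Python sketch):

```python
import math
from itertools import product

# Enumerate all 36 equally likely outcomes of two dice.
outcomes = list(product(range(1, 7), repeat=2))
X = [a for a, b in outcomes]        # first die
Y = [b for a, b in outcomes]        # second die
Z = [a + b for a, b in outcomes]    # their sum

def corr(u, v):
    """Pearson correlation over the enumerated (equally likely) outcomes."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((x - mu) * (y - mv) for x, y in zip(u, v))
    su = math.sqrt(sum((x - mu) ** 2 for x in u))
    sv = math.sqrt(sum((y - mv) ** 2 for y in v))
    return cov / (su * sv)

print(corr(X, Y))  # 0: the dice are independent
print(corr(X, Z))  # 1/sqrt(2), approximately 0.707
print(corr(Y, Z))  # 1/sqrt(2), approximately 0.707
```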
|
15,974
|
Computation speed in R?
|
R works in-memory - so your data do need to fit into memory for the majority of functions.
The compiler package, if I am thinking of the thing you are thinking of (Luke Tierney's compiler package supplied with R), is not the same thing as a compiled language in the traditional sense (C, Fortran). It is a byte compiler for R in the sense of Java bytecode executed by the Java VM or byte compiling of Emacs LISP code. It doesn't compile R code down into machine code but rather prepares the R code into bytecode so it can be used more efficiently than raw R code to be interpreted.
Note that if you have well formed Fortran you could probably have best of both worlds; R can call compiled Fortran routines.
|
15,975
|
Computation speed in R?
|
I have used SAS for 15 years, and have started using R seriously in the past 6 months, with some tinkering around in it for a couple of years before that. From a programming perspective, R does data manipulations directly; there is no equivalent to DATA steps or PROC SQL procedures because they're not needed (the latter being more efficient in SAS when there is a lot of data manipulation to do from external data sources, e.g. administrative data). This means that, now I'm getting the hang of it, data manipulation is faster in R and requires much less code.
The main issue I have encountered is memory. Not all R packages allow WEIGHT type specifications, so if you have SAS datasets with variables used in FREQ or REPLICATE statements, you may have issues. I have looked at the ff and bigmemory packages in R but they do not appear to be compatible with all R packages, so if you have very large datasets that require analyses that are relatively uncommon, and have been aggregated, you may have issues with memory.
For automation, if you have SAS macros then you should be able to programme the equivalent in R and run as batch.
For coding in R, I was using Notepad++ and setting the language to R, and am now discovering the joys of R Studio. Both these products are free, and do language mark up like the improved SAS syntax GUI (I've only ever used the syntax screen in SAS).
There is a website, and related book, for people swapping from SAS to R. I found them useful for trying to work out how to translate some SAS commands into R.
Update: one thing that drove me nuts when coming to R is that R doesn't assume everything is a data set (data frame in R parlance), because it's not a statistical package in the way that SAS, SPSS, Stata, etc are. So, for example, it took me a while to get if statements working because I kept getting the help for if statements with vectors (or maybe matrices) whereas I needed an if statement that worked with data frames. So the help pages probably need to be read more closely than you would normally, because you'll need to check that the command you want to do will operate with the data object type you have.
The bit that still drives me crazy when learning a new R command (e.g. analysis method in a contributed package) is that the help for commands is often not entirely self-contained. I will go to the help page to try to learn the command and the usage notes often have ... contained in them. Sometimes trying to work out what can or should go where the ... is has led me into a recursive loop. The relative brevity of the help notes, coming from SAS which provides detailed examples of syntax and worked examples with an explanation of the study in the example, was quite a large shock.
|
15,976
|
Computation speed in R?
|
R is a programming language. It works not in datasteps. It does whatever you want it to do, for it is but a programming language, a slave for your desires, expressed in a language of curly brackets and colons.
Think of it like Fortran or C, but with implicit vectorisation so you don't have to loop over arrays, and dynamic memory management so you don't have to malloc() or declare array sizes at any time.
It mostly does all its work in memory, but if you want to read part of a file in, mung it, then spit out some of the results, and read the next bit in, well, you go ahead and write an R program that does that.
You contradict yourself in saying the model is computationally intensive yet SAS is slow because of I/O... One or the other surely...
If you've got something similar in Fortran already, and you say you want to move away from an interpreted language, then why not just do it in Fortran as well?
The R compiler can cause some speedups, but if your R code is well written anyway you won't get anything too massive - not like writing it in C or Fortran.
|
15,977
|
Computation speed in R?
|
I understand that by default SAS can work with models that are bigger than memory, but this is not the case with R, unless you specifically use packages like biglm or ff.
However, if you are doing array work in R that can be vectorised it will be very quick - maybe half the speed of a C program in some cases, but if you are doing something that can't be vectorised, then it will seem quite slow. To give you an example:
# create a data.frame with 4 columns of standard normally distributed RVs
N <- 10000
# test 1
system.time( {df1 <- data.frame(h1=rnorm(N),
h2=rpois(N, lambda=5),
h3=runif(N),
h4=rexp(N))
} )
# about 0.003 seconds elapsed time
# vectorised sum of columns 1 to 4
# i.e. it can work on an entire column all at once
# test 2
system.time( { df1$rowtotal1 <- df1$h1 + df1$h2 + df1$h3 + df1$h4 })
# about 0.001 seconds elapsed time
# test 3
# another version of the vectorised sum
system.time( { df1$rowtotal2 <- rowSums(df1[,c(1:4)]) })
# about 0.001 seconds elapsed time
# test 4
# using a loop... THIS IS *VERY* SLOW AND GENERALLY A BAD IDEA!!! :-)
df1$rowtotal3 <- NA  # pre-allocate the result column
system.time( {
for(i in 1:nrow(df1)) {
df1$rowtotal3[i] <- df1[i,1] + df1[i,2] + df1[i,3] + df1[i,4]
}
} )
# about 9.2 seconds elapsed time
When I increased N by a factor of ten to 100,000, I gave up on test 4 after 20 minutes, but tests 1:3 took 61, 3 and 37 milliseconds each.
For N = 10,000,000 the times for tests 1:3 are 3.3 s, 0.6 s and 1.6 s.
Note that this was done on an i7 laptop, and at 480 MB for N = 10 million, memory was not an issue.
For users on 32-bit Windows there is a 1.5 GB memory limit for R no matter how much memory you have, but there is no such limit for 64-bit Windows or 64-bit Linux. These days memory is very cheap compared with the cost of an hour of my time, so I just buy more memory rather than spend time trying to get around this. But this assumes that your model will fit in memory.
|
15,978
|
Computation speed in R?
|
(2), ideally, we'd like to create an executable, but R is normally used as a scripted language
Yes, and this is a good reason to move to R. The point of writing an R package is to let users easily make your functions interact with other tools provided by R, e.g. feeding them bootstrapped data... or whatever they want to. If you don't think this is important, stick to C/C++ or your favorite compiled language.
I want to add a caveat: you're already a programmer, so learning R will be easy and fast; learning efficient R programming will take longer. Because R is interpreted, the constants hidden in the $O()$ of the asymptotic complexity can be huge or small... for example, if you are interested in runs in your data, you will use rle(), and it will be fast (it's a precompiled function). If you script exactly the same algorithm yourself, it will be slow (it will be interpreted). This is a basic example: there are plenty of tricks using vectors and matrices to avoid interpreted loops and make precompiled functions do all the work.
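To illustrate the rle() point in a language-agnostic way, here is a Python sketch in which a library builtin (itertools.groupby) plays the role of the precompiled function, against a hand-written loop computing the same run-length encoding:

```python
from itertools import groupby

data = [1, 1, 1, 2, 2, 3, 1, 1]

# "Precompiled" route: delegate the run detection to a library builtin.
runs_fast = [(k, len(list(g))) for k, g in groupby(data)]

# Hand-rolled interpreted loop computing the same thing.
runs_slow = []
for x in data:
    if runs_slow and runs_slow[-1][0] == x:
        runs_slow[-1] = (x, runs_slow[-1][1] + 1)
    else:
        runs_slow.append((x, 1))

assert runs_fast == runs_slow == [(1, 3), (2, 2), (3, 1), (1, 2)]
```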
So be careful. After your first tries, you may well be put off by R, because you'll find it slow, with a weird syntax, etc. Once you know it, it can be a very efficient tool. You may even end up scripting your methods in R as a preliminary phase for C/C++ coding. The ultimate stage is to learn R's API to create precompiled functions, and then you'll be an R wizard :)
|
15,979
|
Computation speed in R?
|
Array manipulation in memory is a big thing for SAS, apparently. I do not know the specifics concerning R, but I surmise that R operates in memory by default, since the memory-expanding packages for R, ff and bigmemory, move data from memory to disk. I have pointers for you if you want to improve either speed or memory usage.
To improve speed, first use R as intended: vectorize your code and use byte-code compilation (also, avoid memory-copy operations as much as possible). Second, use the provided code profiler Rprof() to identify slow patches in your code, and rewrite them in C or C++ if need be.
If you need more memory, you can use the skip argument in the read.table() function to read in your data a chunk at a time, and you can also use a package such as RMySQL, which adds database manipulation utilities to R. If you need still more memory and can afford the concomitant decrease in speed, you can use the snow package to run R in parallel. (You can find details about this, and much more, in the book "The Art of R Programming" by Norman Matloff, published at the end of last year. Details about the packages mentioned here can be found online.)
|
15,980
|
What machine learning algorithm can be used to predict the stock market?
|
As babelproofreader mentioned, those that have a successful algorithm tend to be very secretive about it. Thus it's unlikely that any widely available algorithm is going to be very useful out of the box unless you are doing something clever with it (at which point it sort of stops being widely available since you are adding to it).
That said, learning about autoregressive integrated moving average (ARIMA) models might be a useful start for forecasting time-series data. Don't expect better-than-random results, though.
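To make the autoregressive idea concrete, here is a minimal pure-Python sketch (an illustration, not the answer's method, and not a trading recipe): it fits an AR(1) model by ordinary least squares to simulated "returns" that are pure noise, and the estimated coefficient comes out near zero — which is exactly the "don't expect better than random" point. All names and numbers here are illustrative assumptions.

```python
import random

# Illustrative sketch only: fit an AR(1) model y_t = a + b * y_{t-1} + e_t
# by ordinary least squares on simulated "returns" that are pure noise,
# to show the basic mechanics behind autoregressive forecasting -- and
# why the fit finds essentially nothing to exploit.

def fit_ar1(series):
    """Return (intercept, slope) from an OLS fit of y_t on y_{t-1}."""
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sxy / sxx
    a = my - b * mx
    return a, b

random.seed(0)
returns = [random.gauss(0, 1) for _ in range(5000)]
a, b = fit_ar1(returns)
# On i.i.d. noise the estimated AR coefficient is close to zero:
print(abs(b) < 0.1)  # → True
```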
|
15,981
|
What machine learning algorithm can be used to predict the stock market?
|
I think for your purposes, you should pick a machine learning algorithm you find interesting and try it.
Regarding Efficient Market Theory: the markets are not efficient, on any time scale. Also, some people (both in academia and among real-life quants) are motivated by the intellectual challenge, not just by getting rich quick, and they do publish interesting results (and I count a failed result as an interesting one). But treat everything you read with a pinch of salt; if the results are really good, perhaps their scientific method isn't.
Data Mining With R might be a useful book for you; it is pricey, so try to find it in your university library. Chapter 2 covers just what you want to do, and he gets best results with a neural net. But be warned that he gets poor results, and spends a lot of CPU time to get them. The Amazon reviews point out the book costs $20 more because that chapter mentions the word finance; when reading it I got the impression the publisher had pushed him to write it. He's done his homework, read the docs, perused the right mailing lists, but his heart was not in it. I got some useful R knowledge from it, but won't be beating the market with it :-)
|
15,982
|
What machine learning algorithm can be used to predict the stock market?
|
To my mind, any run-of-the-mill strong AI that could do all of the following might easily produce a statistically significant prediction:
Gather and understand rumours
Access and interpret all government knowledge
Do so in every relevant country
Make relevant predictions about:
Weather conditions
Terrorist activity
Thoughts and feelings of individuals
Everything else that affects trade
Statistical analysis is the least of your worries, really.
|
15,983
|
What machine learning algorithm can be used to predict the stock market?
|
You could try the auto.arima and ets functions in R. You might also have some success with the rugarch package, but there are no existing functions for automated parameter selection. Maybe you could get parameters for the mean model from auto.arima, then pass them to rugarch and add garch(1,1)?
There are all sorts of blogs out there that claim some success doing this. Here's a system using an arima model (and later a garch model), and a system using an SVM model. You'll find a lot of good info on FOSS trading, particularly if you start reading the blogs on his blogroll.
Whatever model you use, be sure to cross-validate and benchmark! I'd be very surprised if you found an arima, ets, or even garch model that could consistently beat a naive model out-of-sample. Examples of time series cross-validation can be found here and here. Keep in mind that what you REALLY want to forecast is returns, not prices.
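To make the cross-validation advice concrete, here is a small walk-forward (rolling-origin) evaluation sketch in Python. The simulated i.i.d. returns stand in for real data, and the window size and the two forecasting models are illustrative assumptions only — the point is the evaluation loop: at each step, only past data is used to forecast the next return.

```python
import random

# Sketch of rolling-origin ("walk-forward") time-series cross-validation,
# with made-up i.i.d. returns standing in for real data.

random.seed(1)
returns = [random.gauss(0, 1) for _ in range(2000)]

window = 50
naive_sse = mean_sse = 0.0
n_tests = 0
for t in range(window, len(returns)):
    history = returns[t - window:t]   # only past data is visible at time t
    naive_fc = 0.0                    # naive model: predict a zero return
    mean_fc = sum(history) / window   # trailing-mean forecast
    actual = returns[t]
    naive_sse += (actual - naive_fc) ** 2
    mean_sse += (actual - mean_fc) ** 2
    n_tests += 1

naive_mse = naive_sse / n_tests
mean_mse = mean_sse / n_tests
print(naive_mse, mean_mse)  # on data like this, the naive model is very hard to beat
```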
|
15,984
|
What machine learning algorithm can be used to predict the stock market?
|
I know of one machine learning approach which is currently in use by at least one hedge fund. numer.ai is using an ensemble of user-provided machine learning algorithms to direct the actions of the fund.
In other words:
A hedge fund provides open access to an encrypted version of data on a couple of hundred investment vehicles, most likely stocks. Thousands of data scientists and the like train all sorts of machine learning algorithms against that data and upload the results to a scoreboard. The highest scorers get a small amount of money depending on the accuracy of their results and how long their result has been available online.
The best predictions are supposedly made by ensembles of algorithms.
So you have a lot of scientists providing trained guesses, some of which are themselves ensembles of guesses and the hedge fund uses the ensemble of all provided guesses to direct their investments.
This rather interesting hedge fund's results taught me two things:
Ensembles are often viewed as a good way of making predictions on the stock market.
Good predictions require more ensembles than I'm willing to build myself...
If you want to have a go, visit: https://numer.ai/
No, I'm NOT affiliated with them; I'd most likely not spend my days online if I were connected to a hedge fund that employs thousands of people but pays only those who provide measurable results :)
The numer.ai community has a forum where they discuss their approach so you CAN learn from others who are trying to do the same.
Personally I think anyone with a good algorithm is going to keep it very, very secret.
|
15,985
|
What machine learning algorithm can be used to predict the stock market?
|
You should try GMDH-type neural networks.
I know that some successful commercial packages for stock market prediction are using it, but mention it only in the depths of the documentation.
In a nutshell, it is a multilayered iterative neural network, so you are on the right track.
|
15,986
|
What machine learning algorithm can be used to predict the stock market?
|
I think hidden Markov models are popular in stock-market modeling. The most important thing to keep in mind is that you want an algorithm that preserves the temporal aspect of your data.
|
15,987
|
Variance of linear combinations of correlated random variables
|
This is just an exercise in applying basic properties of sums, the linearity of expectation, and the definitions of variance and covariance:
\begin{align}
\operatorname{var}\left(\sum_{i=1}^n a_i X_i\right)
&= E\left[\left(\sum_{i=1}^n a_i X_i\right)^2\right]
- \left(E\left[\sum_{i=1}^n a_i X_i\right]\right)^2
&\scriptstyle{\text{one definition of variance}}\\
&= E\left[\sum_{i=1}^n\sum_{j=1}^n a_i a_j X_iX_j\right]
- \left(E\left[\sum_{i=1}^n a_i X_i\right]\right)^2
&\scriptstyle{\text{basic properties of sums}}\\
&= \sum_{i=1}^n\sum_{j=1}^n a_i a_j E[X_iX_j]
- \left(\sum_{i=1}^n a_i E[X_i]\right)^2
&\scriptstyle{\text{linearity of expectation}}\\
&= \sum_{i=1}^n\sum_{j=1}^n a_i a_j E[X_iX_j]
- \sum_{i=1}^n \sum_{j=1}^n a_ia_j E[X_i]E[X_j]
&\scriptstyle{\text{basic properties of sums}}\\
&= \sum_{i=1}^n\sum_{j=1}^n a_i a_j \left(E[X_iX_j]
- E[X_i]E[X_j]\right)&\scriptstyle{\text{combine the sums}}\\
&= \sum_{i=1}^n\sum_{j=1}^n a_i a_j\operatorname{cov}(X_i,X_j)
&\scriptstyle{\text{apply a definition of covariance}}\\
&= \sum_{i=1}^n a_i^2\operatorname{var}(X_i)
+ 2\sum_{i=1}^n \sum_{j\colon j > i}^n a_ia_j\operatorname{cov}(X_i,X_j)
&\scriptstyle{\text{re-arrange sum}}\\
\end{align}
Note that in that last step, we have also identified
$\operatorname{cov}(X_i,X_i)$ as the variance
$\operatorname{var}(X_i)$.
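For readers who like to sanity-check algebra numerically, here is a short pure-Python snippet using a made-up 3×3 covariance matrix and coefficient vector: the full double sum over $(i,j)$ must equal the "variance terms plus twice the upper triangle" form from the last step of the derivation.

```python
# Numerical sanity check of the identity derived above, using a made-up
# 3x3 covariance matrix: the full double sum over (i, j) must equal the
# "variance terms plus twice the upper triangle" form.

a = [2.0, -1.0, 0.5]
cov = [[4.0, 1.0, -0.5],   # cov[i][j] = cov(X_i, X_j); diagonal = variances
       [1.0, 9.0, 2.0],
       [-0.5, 2.0, 1.0]]
n = len(a)

# sum_i sum_j a_i a_j cov(X_i, X_j)
double_sum = sum(a[i] * a[j] * cov[i][j]
                 for i in range(n) for j in range(n))

# sum_i a_i^2 var(X_i) + 2 sum_{i<j} a_i a_j cov(X_i, X_j)
split = (sum(a[i] ** 2 * cov[i][i] for i in range(n))
         + 2 * sum(a[i] * a[j] * cov[i][j]
                   for i in range(n) for j in range(i + 1, n)))

print(double_sum, split)  # → 18.25 18.25
```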
|
15,988
|
Variance of linear combinations of correlated random variables
|
You can actually do it by recursion without using matrices:
Take the result for $\text{Var}(a_1X_1+Y_1)$ and let $Y_1=a_2X_2+Y_2$.
$\text{Var}(a_1X_1+Y_1)$
$\qquad=a_1^2\text{Var}(X_1)+2a_1\text{Cov}(X_1,Y_1)+\text{Var}(Y_1)$
$\qquad=a_1^2\text{Var}(X_1)+2a_1\text{Cov}(X_1,a_2X_2+Y_2)+\text{Var}(a_2X_2+Y_2)$
$\qquad=a_1^2\text{Var}(X_1)+2a_1a_2\text{Cov}(X_1,X_2)+2a_1\text{Cov}(X_1,Y_2)+\text{Var}(a_2X_2+Y_2)$
Then keep substituting $Y_{i-1}=a_iX_i+Y_i$ and using the same basic results; at the last step, use $Y_{n-1}=a_nX_n$.
With vectors (so the result must be scalar):
$\text{Var}(a'\,X)=a'\,\text{Var}(X)\,a$
Or with a matrix (the result will be a variance-covariance matrix):
$\text{Var}(A\,X)=A\,\text{Var}(X)\,A'$
This has the advantage that the off-diagonal elements of the result give the covariances between the various linear combinations whose coefficients are the rows of $A$.
Even if you only know the univariate results, you can confirm these by checking element-by-element.
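Along the lines of that last remark, here is a quick element-wise check in pure Python (with hypothetical numbers) that the matrix form $\text{Var}(A\,X)=A\,\text{Var}(X)\,A'$ reproduces the scalar formula: diagonal entry $(k,k)$ is the variance of the linear combination given by row $k$ of $A$.

```python
# Element-wise check (hypothetical numbers) that the matrix form
# Var(A X) = A Var(X) A' matches the scalar variance formula on its
# diagonal, and is symmetric off the diagonal.

cov = [[4.0, 1.0, -0.5],
       [1.0, 9.0, 2.0],
       [-0.5, 2.0, 1.0]]
A = [[2.0, -1.0, 0.5],
     [1.0, 0.0, 3.0]]
n = len(cov)

def var_AX(A, cov):
    """Compute A cov A' by straightforward matrix multiplication."""
    m = len(A)
    return [[sum(A[r][i] * cov[i][j] * A[s][j]
                 for i in range(n) for j in range(n))
             for s in range(m)] for r in range(m)]

def scalar_formula(a, cov):
    """var(a'X) = sum a_i^2 var(X_i) + 2 sum_{i<j} a_i a_j cov(X_i, X_j)."""
    return (sum(a[i] ** 2 * cov[i][i] for i in range(n))
            + 2 * sum(a[i] * a[j] * cov[i][j]
                      for i in range(n) for j in range(i + 1, n)))

V = var_AX(A, cov)
print(V[0][0] == scalar_formula(A[0], cov))  # → True
print(V[1][1] == scalar_formula(A[1], cov))  # → True
print(V[0][1] == V[1][0])                    # → True  (symmetry)
```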
|
15,989
|
Variance of linear combinations of correlated random variables
|
Here is a slightly different proof based on matrix algebra.
Convention: a vector of the kind $(m,y,v,e,c,t,o,r)$ is a column vector unless otherwise stated.
Let $a = (a_1,\ldots,a_n)$, $\mu = (\mu_1,\ldots,\mu_n) = E(X)$ and set $Y = a_1X_1+\ldots+a_nX_n = a^\top X$. Note first that, by the linearity of the integral (or sum)
$$E(Y) = E(a_1X_1+\ldots+a_nX_n) = a_1\mu_1+\cdots +a_n\mu_n = a^\top \mu.$$
Then
\begin{align}
\text{var}(Y) &= E(Y-E(Y))^2 = E\left(a_1X_1+\ldots+a_nX_n-E(a_1X_1+\ldots+a_nX_n)\right)^2\\
& = E\left[(a^\top X - a^\top\mu)(a^\top X - a^\top\mu)\right]\\
& = E\left[(a^\top X - a^\top\mu)(a^\top X - a^\top\mu)^\top\right]\\
& = E\left[a^\top(X - \mu)\left(a^\top(X - \mu)\right)^\top\right]\\
& = E\left[a^\top(X - \mu)(X - \mu)^\top a\right]\\
& = a^\top E\left[(X - \mu)(X - \mu)^\top\right]a\\
& = a^\top \operatorname{cov}(X)\,a. \tag{*}
\end{align}
Here $\operatorname{cov}(X) = [\operatorname{cov}(X_i,X_j)]$ is the covariance matrix of $X$, with entries $\operatorname{cov}(X_i,X_j)$ such that $\operatorname{cov}(X_i,X_i) = \operatorname{var}(X_i)$. Note the trick of inserting a $^\top$ symbol in the third line, which is valid since $r^\top = r$ for any real $r$; the next line then uses $(a^\top(X-\mu))^\top = (X-\mu)^\top a$. In passing from the fifth line to the sixth I have again used the linearity of the expectation, to move the constants $a^\top$ and $a$ outside the expectation.
Straightforward matrix multiplication will reveal that the desired result is nothing but the expanded version of the quadratic form (*).
|
15,990
|
Variance of linear combinations of correlated random variables
|
Just for fun, proof by induction!
Let $P(k)$ be the statement that $Var[\sum_{i=1}^k a_iX_i] = \sum_{i=1}^k a_i^2\sigma_i^2 + 2\sum_{i=1}^k \sum _{j>i}^k a_ia_jCov[X_i, X_j]$
Then $P(2)$ is (trivially) true (you said you're happy with that in the question).
Let's assume $P(k)$ is true. Thus,
$Var[\sum_{i=1}^{k+1} a_iX_i] = Var[\sum_{i=1}^{k} a_iX_i + a_{k+1}X_{k+1}]$
$=Var[\sum_{i=1}^{k} a_iX_i] + Var[a_{k+1}X_{k+1}] + 2 Cov[\sum_{i=1}^{k} a_iX_i,a_{k+1}X_{k+1}]$
$=\sum_{i=1}^k a_i^2\sigma_i^2 + 2\sum_{i=1}^k \sum _{j>i}^k a_ia_jCov[X_i, X_j]+ a_{k+1}^2\sigma_{k+1}^2 + 2Cov[\sum_{i=1}^{k} a_iX_i, a_{k+1}X_{k+1}]$
$=\sum_{i=1}^{k+1} a_i^2\sigma_i^2 + 2\sum_{i=1}^k \sum _{j>i}^k a_ia_jCov[X_i, X_j] + 2\sum_{i=1}^ka_ia_{k+1}Cov[X_i, X_{k+1}]$
$=\sum_{i=1}^{k+1} a_i^2\sigma_i^2 + 2\sum_{i=1}^{k+1} \sum _{j>i}^{k+1} a_ia_jCov[X_i, X_j]$
Thus $P(k+1)$ is true.
So, by induction,
$Var[\sum_{i=1}^n a_iX_i] = \sum_{i=1}^n a_i^2\sigma_i^2 + 2\sum_{i=1}^n \sum _{j>i}^n a_ia_jCov[X_i, X_j]$ for all integers $n \geq 2$.
|
15,991
|
Variance of linear combinations of correlated random variables
|
Basically, the proof is the same as for the first formula. I will prove it using a very direct, brute-force method.
$Var(a_1X_1+\ldots+a_nX_n)=E[(a_1X_1+\ldots+a_nX_n)^2]-[E(a_1X_1+\ldots+a_nX_n)]^2$
$=E[(a_1X_1)^2+\ldots+(a_nX_n)^2+2a_1a_2X_1X_2+2a_1a_3X_1X_3+\ldots+2a_1a_nX_1X_n+\ldots+2a_{n-1}a_nX_{n-1}X_n]-[a_1E(X_1)+\ldots+a_nE(X_n)]^2$
$=a_1^2E(X_1^2)+\ldots+a_n^2E(X_n^2)+2a_1a_2E(X_1X_2)+\ldots+2a_{n-1}a_nE(X_{n-1}X_n)-a_1^2[E(X_1)]^2-\ldots-a_n^2[E(X_n)]^2-2a_1a_2E(X_1)E(X_2)-\ldots-2a_{n-1}a_nE(X_{n-1})E(X_n)$
$=a_1^2E(X_1^2)-a_1^2[E(X_1)]^2+\ldots+a_n^2E(X_n^2)-a_n^2[E(X_n)]^2+2a_1a_2E(X_1X_2)-2a_1a_2E(X_1)E(X_2)+\ldots+2a_{n-1}a_nE(X_{n-1}X_n)-2a_{n-1}a_nE(X_{n-1})E(X_n)$
Next just note that
$a_n^2E(X_n^2)-a_n^2[E(X_n)]^2=a_n^2\sigma_n^2$
and
$2a_{n-1}a_nE(X_{n-1}X_n)-2a_{n-1}a_nE(X_{n-1})E(X_n)=2a_{n-1}a_nCov(X_{n-1},X_n)$
|
15,992
|
Continuous random variables - probability of a kid arriving on time for school
|
As you suggested, $X$ and $Y$ can be described as two independent uniform random variables, $X \sim \mathcal{U}(375, 405)$ and $Y \sim \mathcal{U}(30, 40)$. We are interested in finding $\mathbb{P}[X + Y \leq 420]$. This problem can be handled with a straightforward geometric approach: the event $\{X + Y \leq 420\}$ corresponds to the (grey) region of the rectangle $[375,405]\times[30,40]$ lying below the line $x + y = 420$, and this region is a rectangle of width $d_1$ plus a triangle of base $d_2$.
$$\mathbb{P}[X + Y \leq 420] = \frac{\text{grey area}}{\text{total area}} = \frac{ \Delta y \times d_1+ \frac{1}{2}\Delta y \times d_2}{\Delta x \times \Delta y} = \frac{5 + \frac{1}{2}\cdot10}{30} = \frac{1}{3},$$
where $\Delta x = 405 - 375 = 30$, $\Delta y = 40 - 30 = 10$, $d_1 = (420-40)-375 = 5$ and $d_2 = 390 - (420-40) = 10$.
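A quick Monte Carlo sanity check of this geometric answer (a sketch, with the parameterization from the answer above: times in minutes after midnight):

```python
import random

# Monte Carlo check: draw departure time X ~ U(375, 405) and drive time
# Y ~ U(30, 40), and estimate P[X + Y <= 420], i.e. arriving by 7:00 AM.
# The exact answer derived above is 1/3.

random.seed(42)
trials = 200_000
on_time = sum(1 for _ in range(trials)
              if random.uniform(375, 405) + random.uniform(30, 40) <= 420)
estimate = on_time / trials
print(estimate)  # close to 1/3 ≈ 0.3333
```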
|
15,993
|
Continuous random variables - probability of a kid arriving on time for school
|
Write $X \sim U(15,45)$ and $Y \sim U(30,40)$; then we can write what we are trying to solve for as $P(X+Y<60)$. I am using 6:00 AM as the starting time here, and therefore need the sum of the time passed until departure and the drive time to be less than 60 minutes.
Define $Z=X+Y,$ so that
$$F_Z(z) = \int_{30}^{40} F_X(z-y)f_Y(y)dy,$$
which follows from summing two independent continuous random variables. We would like to solve for $F_Z(60)$. Replacing $z=60, F_X(x)=\frac{x-15}{30},$ and $f_Y(y) = \frac{1}{10}$, we have
$$P(\text{Arrive before 7AM}) = \int_{30}^{40} \frac{(60-y)-15}{30}\frac{1}{10}dy$$ $$P(\text{Arrive before 7AM})= \frac{1}{300} \int_{30}^{40} (45-y)dy=\frac{1}{3}.$$
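The integral can also be evaluated numerically (a Python sketch of the convolution above, using a simple midpoint rule; since the integrand is linear in $y$ over the integration range, the midpoint rule is exact up to rounding):

```python
# F_Z(60) = integral over y in [30, 40] of F_X(60 - y) * f_Y(y) dy,
# with F_X(x) = (x - 15)/30 clipped to [0, 1] and f_Y(y) = 1/10.
def F_X(x):
    return min(max((x - 15) / 30, 0.0), 1.0)

n = 10_000
width = (40 - 30) / n
total = sum(
    F_X(60 - (30 + (i + 0.5) * width)) * (1 / 10) * width
    for i in range(n)
)
print(total)  # ≈ 1/3
```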
|
15,994
|
Continuous random variables - probability of a kid arriving on time for school
|
A simpler approach:
There's a 30-minute interval during which he can leave, so there's an $x/30$ chance of leaving during any given $x$-minute period.
There's a $5/30$ chance of leaving between 6:15 and 6:20, and there'll be a $100\%$ chance of arriving before 7 if he leaves at any point during that interval.
There's a $10/30$ chance of leaving between 6:20 and 6:30.
At 6:20 there's a $100\%$ chance of arriving before 7.
At 6:30 there's a $0\%$ chance of arriving before 7 (since time is continuous, there's a $0\%$ chance of taking exactly 30 minutes).
The chance of arriving before 7 decreases linearly between 6:20 and 6:30, because this simply corresponds to (the reverse of) the probability of having the duration of the journey be shorter than some duration, which is linear.
We can average these percentages and say there's a $50\%$ chance of arriving before 7 if we leave at some random point between 6:20 and 6:30.
There's a $0\%$ chance of arriving before 7 if we leave after 6:30, so we can disregard this.
Now we can simply add up the probabilities to get the overall probability:
$\frac{5}{30} \cdot 100\% + \frac{10}{30} \cdot 50\% + 0 = \frac{5}{30} + \frac{5}{30} = \frac{10}{30} = \frac{1}{3}$
So there's a $1/3$ chance of arriving before 7 AM.
|
15,995
|
Continuous random variables - probability of a kid arriving on time for school
|
We should begin by partitioning the space.
If the dad leaves at 6:45, then there is a 0% chance he makes it to school on time, since the ride takes at least 30 minutes. So at the very latest, the dad needs to leave between 6:15 and 6:30.
Let's write out some scenarios:
Dad leaves 0 minutes after 6:15, he can take at most 45 minutes
Dad leaves 5 minutes after 6:15, he can take at most 40 minutes
Dad leaves 10 minutes after 6:15, he can take at most 35 minutes
Dad leaves 15 minutes after 6:15, he can take at most 30 minutes
Dad leaves any later than that, the kid will be late
Let $X \sim U(0,30)$ be the minutes dad leaves after 6:15 and let $Y\sim U(30,40)$ be the duration of the ride in minutes. The kid will arrive on time if
$$ X+Y \leq 45 $$
An alternative way to think about this is "the time for the kid to get to school, including the time for dad to leave, must not exceed 45 minutes assuming the dad leaves at 6:15 at the earliest". Because both random variables are uniform, you can just take the ratio of areas to compute probability.
The area of the region where dad can make it to school is $5 \times 10 + \frac{10 \times 10}{2} = 100$. The area of the entire space is $30 \times 10 = 300$. So there is a 1/3 chance dad makes it. Let's verify this with simulation in R:
x <- runif(100000000, 0, 30)   # minutes after 6:15 that dad leaves
y <- runif(100000000, 30, 40)  # ride duration in minutes
mean(x + y < 45)
## [1] 0.3333499
Which is correct to within simulation error.
|
15,996
|
Why am I getting information entropy greater than 1?
|
Entropy is not the same as probability.
Entropy measures the "information" or "uncertainty" of a random variable.
When you are using base 2, it is measured in bits, and there can be more than one bit of information in a variable.
In this example, one sample "contains" about 1.15 bits of information.
In other words, if you were able to compress a series of samples perfectly, you would need that many bits per sample, on average.
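A minimal sketch of the computation in Python (the 3-category distribution here is hypothetical, not the one from the question):

```python
import math

def entropy_bits(probs):
    """Shannon entropy in bits: H = -sum(p * log2(p)), skipping zero-probability terms."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin carries exactly 1 bit per sample...
print(entropy_bits([0.5, 0.5]))        # 1.0
# ...while a 3-category distribution can exceed 1 bit:
print(entropy_bits([0.5, 0.3, 0.2]))   # ≈ 1.485
```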
|
15,997
|
Why am I getting information entropy greater than 1?
|
The maximum value of entropy is $\log k$, where $k$ is the number of categories you are using. Its numeric value will naturally depend on the base of logarithms you are using.
Using base 2 logarithms as an example, as in the question: $\log_2 1$ is $0$ and $\log_2 2$ is $1$, so a result greater than $1$ is definitely wrong if the number of categories is $1$ or $2$. A value greater than $1$ will be wrong if it exceeds $\log_2 k$.
In view of this, it is fairly common to scale entropy by $\log k$, so that results then do fall between $0$ and $1$.
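A sketch of that scaling in Python (assuming $k$ is simply the number of categories in the distribution):

```python
import math

def entropy_bits(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

def normalized_entropy(probs):
    """Entropy divided by log2(k), so the result always lies in [0, 1]."""
    k = len(probs)
    if k < 2:
        return 0.0  # a single category carries no uncertainty
    return entropy_bits(probs) / math.log2(k)

print(normalized_entropy([0.25] * 4))       # 1.0 -- uniform is maximal
print(normalized_entropy([0.7, 0.2, 0.1]))  # strictly below 1
```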
|
15,998
|
Why am I getting information entropy greater than 1?
|
Earlier answers, specifically "Entropy is not the same as probability" and "the maximum value of entropy is $\log k$", are both correct.
As stated earlier, entropy measures the "information" or "uncertainty" of a random variable. Information can be measured in bits, in which case base-2 logarithms should be used. However, if a different information unit is used, the numeric amount of information changes simply because the unit can encode more information. As an example, 1 bit can encode two events (0 and 1), while 1 ban can encode 10 different events; it follows that 1 ban $= \log_2 10 \approx 3.322$ bits (for comparison, 3 bits can encode $2^3 = 8$ events).
In summary, entropy values between 0 and 1 and values greater than 1 are equally meaningful, as long as the same entropy units are used across comparisons. However, for some applications (e.g., cross-entropy loss) a value between 0 and 1 may be more convenient.
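A Python sketch of the unit change: the same distribution measured in bits (base 2) and in bans (base 10) differs only by the constant factor $\log_2 10 \approx 3.322$ (the distribution itself is hypothetical):

```python
import math

def entropy(probs, base=2.0):
    """Shannon entropy in an arbitrary log base (2 -> bits, e -> nats, 10 -> bans)."""
    return -sum(p * math.log(p, base) for p in probs if p > 0)

p = [0.5, 0.3, 0.2]
bits = entropy(p, base=2)
bans = entropy(p, base=10)
# Same uncertainty, different unit: bits = bans * log2(10)
print(abs(bits - bans * math.log2(10)) < 1e-9)  # True
```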
|
15,999
|
Do we ever use maximum likelihood estimation?
|
I am wondering if maximum likelihood estimation is ever used in statistics.
Certainly! Actually quite a lot -- but not always.
We learn the concept of it but I wonder when it is actually used.
When people have a parametric distributional model, they quite often choose to use maximum likelihood estimation. When the model is correct, there are a number of handy properties of maximum likelihood estimators.
For one example -- the use of generalized linear models is quite widespread and in that case the parameters describing the mean are estimated by maximum likelihood.
It can happen that some parameters are estimated by maximum likelihood and others are not. For example, consider an overdispersed Poisson GLM -- the dispersion parameter won't be estimated by maximum likelihood, because the MLE is not useful in that case.
If we assume the distribution of the data, we find two parameters
Well, sometimes you might have two, but sometimes you have one parameter, sometimes three or four or more.
one for the mean and one for the variance,
Are you thinking of a particular model perhaps? This is not always the case. Consider estimating the parameter of an exponential distribution or a Poisson distribution, or a binomial distribution. In each of those cases, there's one parameter and the variance is a function of the parameter that describes the mean.
Or consider a generalized gamma distribution, which has three parameters. Or a four-parameter beta distribution, which has (perhaps unsurprisingly) four parameters. Note also that (depending on the particular parameterization) the mean or the variance or both might not be represented by a single parameter but by functions of several of them.
For example, the gamma distribution, for which there are three parameterizations that see fairly common use -- the two most common of which have both the mean and the variance being functions of two parameters.
Typically in a regression model, a GLM, or a survival model (among many other model types), the model may depend on multiple predictors, in which case the distribution associated with each observation under the model may have a parameter of its own (or even several) that is related to many predictor variables ("independent variables").
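As a concrete one-parameter example of the kind mentioned above (a Python sketch): for an exponential distribution with rate $\lambda$, the log-likelihood $\sum_i (\log\lambda - \lambda x_i)$ is maximized at $\hat\lambda = n / \sum_i x_i$, the reciprocal of the sample mean, and the variance $1/\lambda^2$ is then a function of that single parameter:

```python
import random

def exponential_mle(xs):
    """MLE of the exponential rate: 1 / sample mean."""
    return len(xs) / sum(xs)

random.seed(0)
true_rate = 2.0
sample = [random.expovariate(true_rate) for _ in range(100_000)]
print(exponential_mle(sample))  # close to 2.0
```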
|
16,000
|
Do we ever use maximum likelihood estimation?
|
While maximum likelihood estimators can look suspicious given the assumptions on the data distribution, Quasi Maximum Likelihood Estimators are often used. The idea is to start by assuming a distribution and solve for the MLE, then remove the explicit distributional assumption and instead look at how your estimator performs under more general conditions. So the quasi-MLE just becomes a smart way of getting an estimator, and the bulk of the work is then deriving the properties of the estimator. Since the distributional assumptions are dropped, though, the quasi-MLE usually doesn't have the nice efficiency properties.
As a toy example, suppose you have an iid sample $x_1, x_2, ..., x_n$, and you want an estimator for the variance of $X$. You could start by assuming $X \sim N(\mu, \sigma^2)$, write the likelihood using the normal pdf, and solve for the argmax to get $\hat\sigma^2 = n^{-1}\sum (x_i - \bar x)^2$. We can then ask questions like: under what conditions is $\hat\sigma^2$ a consistent estimator, is it unbiased (it is not), is it root-$n$ consistent, what is its asymptotic distribution, etc.
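A Python sketch of the toy example: the normal-likelihood argmax $\hat\sigma^2$ is biased in small samples ($E[\hat\sigma^2] = \frac{n-1}{n}\sigma^2$) but consistent, and the consistency does not depend on the data actually being normal, which is the quasi-MLE point:

```python
import random
import statistics

def mle_variance(xs):
    """Argmax of the normal likelihood in sigma^2: (1/n) * sum((x - xbar)^2)."""
    xbar = statistics.fmean(xs)
    return sum((x - xbar) ** 2 for x in xs) / len(xs)

random.seed(1)
# Non-normal data (uniform on [0, 1], true variance 1/12): the quasi-MLE
# still converges to the right value as n grows.
sample = [random.random() for _ in range(200_000)]
print(mle_variance(sample))  # close to 1/12 ≈ 0.0833
```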
|