idx | question | answer
17,801 | Why does cross-validation give biased estimates of error? | The CV error for the best model is optimistic because the model was chosen precisely to minimise that error. If $\widehat{\text{Err}}_i$ is the CV error of the $i$-th model, then
$$ \mathbb{E}[\min\{\widehat{\text{Err}}_1, ..., \widehat{\text{Err}}_m\}] \le \mathbb{E}[\widehat{\text{Err}}_i] $$
for all $i$.
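A quick simulation (illustrative, not from the original answer) makes the selection effect concrete: even if every model's CV estimate is individually unbiased, the minimum over candidates is biased downward.

```python
import numpy as np

rng = np.random.default_rng(0)
m = 20            # number of candidate models
true_err = 0.30   # assume all models share the same true error
noise_sd = 0.05   # sampling noise in each CV estimate

# Each row is one repetition of "estimate the CV error for all m models".
est = true_err + noise_sd * rng.standard_normal((10_000, m))

print(est.mean())               # ~0.30: each individual estimate is unbiased
print(est.min(axis=1).mean())   # well below 0.30: the selected model looks optimistic
```

The gap grows with the number of candidates $m$ and with the noise in the estimates, which is why reporting the winning model's CV score as its generalization error is optimistic.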
If you don't do model selection, then the problem is perhaps more subtle: the quantity being approximated is not the error of the given model as trained on a fixed dataset, but the expected generalization error over all possible datasets of the given size. In other words, CV approximates
$$ \text{Err} := \mathbb{E}_{D \sim \mathbb{P}^n_{X Y}} [ \mathbb{E}_{X Y}[\text{loss}(f_D(X),Y)|D]], $$
where the training set $D$ itself is random, instead of what one might want:
$$ \text{Err}_{X Y} := \mathbb{E}_{X Y}[\text{loss}(f_D(X),Y)|D], $$
for a fixed dataset $D$.
But even for $\text{Err}$, since CV splits the data into, say, $K$ folds, the expectation actually approximated is with respect to $D \sim \mathbb{P}^{\frac{K-1}{K} n }_{X Y}$, so there is some bias with respect to $\text{Err}$. Of course, as $K$ grows and each training split approaches the full sample size $n$, this bias becomes irrelevant.
Other sources of error in CV are the dependencies between the error terms of the test samples: because test points within a fold share a training split, and because the training splits themselves overlap, there is variance which cannot be estimated using the empirical standard error of the individual losses at each test sample. This is the reason why the commonly reported confidence intervals are bogus and have poor coverage. See the recent paper [1] for a nested CV procedure which does provide confidence intervals with good coverage.
[1] Bates, Stephen, Trevor Hastie, and Robert Tibshirani. ‘Cross-Validation: What Does It Estimate and How Well Does It Do It?’, 1 April 2021.
17,802 | Why does cross-validation give biased estimates of error? | Could it be related to the fact that your observations may not be independent? Imagine you are predicting whether a given chemical mixture of two components will explode, based on the properties of the two components. A certain component A may appear in several observations: you can have it in a mixture of A+B, A+C, A+D, etc. Now imagine that you use k-fold validation. When the model is predicting for the A+C mixture, it may already have been trained on the observation "A+B", and will therefore be biased towards the output of that observation (because half of the variables of the two observations are the same: in one you have the properties of A and of C, and in the other you have the properties of A and of B).
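One standard remedy for this kind of leakage is group-aware cross-validation: assign whole groups of dependent observations to the same fold (scikit-learn's `GroupKFold` does this properly). A minimal sketch with a hypothetical `group_kfold` helper, grouping mixtures by a shared component:

```python
import numpy as np

def group_kfold(groups, k, seed=0):
    """Assign each whole group to a single fold (round-robin over shuffled groups)."""
    rng = np.random.default_rng(seed)
    fold_of_group = {g: i % k for i, g in enumerate(rng.permutation(np.unique(groups)))}
    return np.array([fold_of_group[g] for g in groups])

# One observation per mixture, grouped by the shared component "A", "B", ...
groups = np.array(["A", "A", "A", "B", "C", "D"])
folds = group_kfold(groups, k=3)

# All mixtures containing component "A" land in the same fold, so the model
# is never tested on a mixture whose component it has already trained on.
assert len(set(folds[groups == "A"])) == 1
```

Grouping by a single shared component is itself a simplification, since each mixture has two components; the point is that plain k-fold ignores the dependence entirely.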
17,803 | Why does experience replay require an off-policy algorithm? | On-policy methods, like SARSA, expect the actions in every state to be chosen according to the agent's current policy, which usually tends to exploit rewards.
In doing so, the policy improves as we update it based on the latest rewards (here in particular, by updating the parameters of the NN that predicts the value of a certain state/action).
But, if we update our policy based on stored transitions, like in experience replay, we are actually evaluating actions from a policy that is no longer the current one, since it evolved in time, thus making it no longer on-policy.
The Q values are evaluated based on the future rewards that you will get from a state following the current agent policy.
However, that is no longer true, since you are now following a different policy. So a common off-policy method, which explores via an epsilon-greedy approach, is used instead.
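A tabular sketch (illustrative only, not from the answers) of why an off-policy update can consume replayed transitions: Q-learning's target takes a max over actions, so it estimates the optimal value function regardless of which (old) policy generated the stored tuples.

```python
import random
from collections import defaultdict

random.seed(0)
Q = defaultdict(float)          # Q[(state, action)]
buffer = []                     # stored (s, a, r, s_next) transitions
gamma, alpha, actions = 0.9, 0.1, [0, 1]

# Pretend these transitions were collected earlier, under arbitrary old policies.
for _ in range(500):
    s = random.randint(0, 3)
    a = random.choice(actions)               # behaviour policy: uniform random
    r = 1.0 if (s, a) == (3, 1) else 0.0     # toy reward: only (3, 1) pays
    s_next = (s + 1) % 4
    buffer.append((s, a, r, s_next))

# Off-policy replay: the target maxes over actions, so the update does not
# care which policy produced the samples.
for _ in range(20):
    for s, a, r, s_next in random.sample(buffer, 64):
        target = r + gamma * max(Q[(s_next, b)] for b in actions)
        Q[(s, a)] += alpha * (target - Q[(s, a)])

assert Q[(3, 1)] > Q[(3, 0)]   # the rewarded action is preferred
```

A SARSA-style target would instead use the action the *behaviour* policy took next, which is exactly what becomes stale once the stored policy differs from the current one.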
17,804 | Why does experience replay require an off-policy algorithm? | David Silver addresses this in this video lecture at 46:10
http://videolectures.net/rldm2015_silver_reinforcement_learning/:
Experience replay chooses $a$ from $s$ using the policy prevailing at the time, and this is one of its advantages - it allows the Q function to learn from previous policies, which breaks up the correlation of recent states and policies and prevents the network from getting "locked in" to a certain behaviour mode.
17,805 | Why does experience replay require an off-policy algorithm? | TL;DR: It isn't necessary to have an off-policy method when using experience replay, but it makes your life a lot easier.
When following a given policy $\pi$, an on-policy method (for value function estimation) estimates $V^\pi$ or $Q^\pi$ (respectively), whereas an off-policy method estimates $V^*$ or $Q^*$.
The off-policy case is desirable because it guarantees that the estimate of $V^*$ or $Q^*$ will keep getting more accurate even if the policy being followed changes, i.e., following $\pi_1$ will yield $V^*$, following $\pi_2$ will yield $V^*$ and randomly choosing between $\pi_1$ and $\pi_2$ for each step will still yield $V^*$. (If all state-action pairs are seen often enough, ofc.)
In the on-policy case, however, following $\pi_1$ will yield $V^{\pi_1}$, following $\pi_2$ will yield $V^{\pi_2}$ and randomly choosing between the two policies at each step will yield something that is not immediately obvious - at least to me.
In experience replay, the replay buffer is an amalgamation of experiences gathered by the agent following different policies $\pi_1, \dots, \pi_n$ at different times from which a random subset is drawn and used to improve the function approximation in a batch RL / supervised learning style.
Off-policy methods won't have a problem with this; they will happily take the samples and improve the estimate of $V^*$.
However, as we can see from the above, this scenario is very much not ideal for on-policy methods. The policy represented by $V$ or $Q$ will be a (random) combination of the policies in the replay buffer, and who knows whether that policy is at least as good as the previous one. If it isn't, we can't guarantee an improvement once we act $\epsilon$-greedily on it.
Saving and using the $(s_t, a_t, r_{t+1})$-sequence, as you suggest, is what algorithms like A3C or PPO do. You actually have to do this for on-policy methods, because they won't converge to $V^\pi$ or $Q^\pi$ otherwise. The problem here isn't whether on-policy methods will converge when using experience replay, but rather what it is that they converge to, and whether that is still an improvement over the previous iteration.
One way of addressing this problem is to stick to off-policy methods; another is to use on-policy methods, a rolling replay buffer (to "keep the experience fresh"), and careful parameter tuning (making very small steps). Essentially, we aim to make sure that the $V^\pi$ or $Q^\pi$ we actually learn is close enough to $V^{\pi_n}$ or $Q^{\pi_n}$ (from the latest iteration's $\pi_n$) that we can guarantee an improvement when acting greedily wrt. $V^\pi$ or $Q^\pi$.
17,806 | Why does experience replay require an off-policy algorithm? | One answer is that, by definition, if you're using past experiences that were obtained under an outdated policy, then your method is off-policy.
The question of why using experience replay is 'wrong' if you're using vanilla policy gradient, i.e. REINFORCE, still remains, though.
The critical point is that Q-learning methods depend on an expectation that is independent of the policy itself: it does not matter exactly how you found out the expected return of a particular series of actions, whether by accident, through exploration, or by following a policy; it is useful and stable data (see the notes for equations 2 and 3 in https://arxiv.org/abs/1509.02971). In contrast, the expectation of the gradient in REINFORCE depends on the policy, so if you use data points from an outdated policy to calculate the gradient of the parameters, it simply no longer represents the expectation of the gradient under the updated policy. For example, if the optimal $\theta_1$ is 0.5, it is initially set to 0.0, and your sampled gradient tells you to increase it by 0.5, then increasing it by 0.5 yet again, to 1.0, will actually make the policy worse.
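The $\theta_1$ example from the paragraph above can be written out with a stand-in objective (hypothetical, for illustration): the gradient sampled under the old $\theta_1$ stays positive, so reusing it after the policy has moved overshoots the optimum.

```python
def grad_J(theta):
    # Stand-in objective J(theta) = -(theta - 0.5)**2, maximised at theta = 0.5.
    return -2.0 * (theta - 0.5)

theta = 0.0
stale_grad = grad_J(theta)    # +1.0, computed under the old policy
theta += 0.5 * stale_grad     # theta -> 0.5: the optimum
theta += 0.5 * stale_grad     # reusing the stale gradient: theta -> 1.0
# J(1.0) = -0.25 = J(0.0): the second, stale step made the policy worse again.
```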
Where this difference shows itself, though, is that with experience replay you cannot learn stochastic decision making. It's a trade-off between using fewer data points to converge to a possibly worse deterministic policy (a deterministic policy is a subset of a stochastic policy) and using more data points to obtain a stochastic policy.
17,807 | Are we frequentists really just implicit/unwitting Bayesians? | I would argue that frequentists are indeed often "implicit/unwitting Bayesians", as in practice we often want to perform probabilistic reasoning about things that don't have a long run frequency. The classic example is Null Hypothesis Statistical Testing (NHST), where what we really want to know is the relative probabilities of the null and research hypotheses being true, but we can't have this in a frequentist setting, as the truth of a particular hypothesis has no (non-trivial) long run frequency - it is either true or it isn't. Frequentist NHSTs get around this by substituting a different question, "what is the probability of observing an outcome at least as extreme under the null hypothesis", and comparing that to a pre-determined threshold. This procedure does not logically allow us to conclude anything about whether H0 or H1 is true, and in drawing such a conclusion we are actually stepping out of a frequentist framework into a (usually subjective) Bayesian one, where we decide that the probability of observing such an extreme value under H0 is so low that we can no longer believe H0 is likely to be true (note this implicitly assigns a probability to a particular hypothesis).
Note it isn't actually true that frequentist procedures have no subjectivity or priors: in NHSTs, the threshold on the p-value, $\alpha$, serves much the same purpose as the priors $p(H_0)$ and $p(H_1)$ in a Bayesian analysis. This is illustrated by the much-discussed XKCD cartoon, in which a detector reports whether the sun has gone nova but lies whenever two dice both come up six (probability 1/36).
The main reason the frequentist's conclusion is unreasonable is that the value of $\alpha$ does not represent a reasonable state of knowledge regarding the detector and/or solar physics (we know that it is extremely unlikely that the sun has exploded, and rather less unlikely that the detector has raised a false alarm). Note that in this case the conclusion that the sun has exploded is inferred from a low p-value (a Bayesian inference) but is not logically entailed by it. The subjectivity is still there, but not stated explicitly in the analysis, and often neglected.
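The cartoon's arithmetic can be written out directly (the prior below is an assumed, illustrative number, not part of the original answer):

```python
from fractions import Fraction

p_lie = Fraction(1, 36)              # the detector lies iff both dice show six
prior_exploded = Fraction(1, 10**6)  # assumed prior; any tiny value makes the point

# p-value route: P(detector says "yes" | sun fine) = 1/36 ~ 0.028 < 0.05,
# so the frequentist in the cartoon "rejects" the null that the sun is fine.
# Bayes route: condition on the same "yes" reading.
num = (1 - p_lie) * prior_exploded
posterior = num / (num + p_lie * (1 - prior_exploded))
print(float(posterior))              # tiny: the sun is almost surely fine
```

The same observation yields p < 0.05 yet a posterior probability of an exploded sun on the order of $10^{-5}$, because the prior dominates.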
Arguably, confidence intervals are often used as (and interpreted as) an interval in which we can expect the true value to lie with a given probability, which again is a Bayesian interpretation.
Ideally statisticians ought to be aware of the benefits and disadvantages of both approaches and be prepared to use the right framework for the application at hand. Basically we should aim to use the analysis that provides the most direct answer to the question we actually want answered (and not quietly substitute a different one), so a frequentist approach is probably most efficient where we actually are interested in long-run frequencies and Bayesian methods where that is not the case.
I suspect that most frequentist questions can be answered by a Bayesian, as there is nothing to stop a Bayesian from answering questions like "what is the probability of observing a result at least as extreme if $H_0$ is true"; however, I'll need to do a bit of reading on that one, interesting question.
17,808 | Are we frequentists really just implicit/unwitting Bayesians? | Bayesians and frequentists do not only differ in how they obtain inferences, or in how similar or different those inferences can be under certain prior choices. The main difference is how they interpret probability:
Bayesian probability:
Bayesian probability is one interpretation of the concept of probability. In contrast to interpreting probability as frequency or propensity of some phenomenon, Bayesian probability is a quantity that is assigned to represent a state of knowledge, or a state of belief.
Frequentist probability:
Frequentist probability or frequentism is a standard interpretation of probability; it defines an event's probability as the limit of its relative frequency in a large number of trials. This interpretation supports the statistical needs of experimental scientists and pollsters; probabilities can be found (in principle) by a repeatable objective process (and are thus ideally devoid of opinion). It does not support all needs; gamblers typically require estimates of the odds without experiments.
These two definitions represent two irreconcilable approaches to defining the concept of probability (at least so far). So, there are more fundamental differences between these two areas than whether you can obtain similar estimators or same conclusions in some parametric or nonparametric models.
17,809 | Bootstrap: the issue of overfitting | I am not completely sure I understand your question right... I am assuming you are interested in the order of convergence? Because the empirical cdf has about $N$ parameters: of course, asymptotically it converges to the population cdf, but what about finite samples?
Have you read any of the basics of bootstrap theory?
The problem is that it gets pretty wild (mathematically) pretty quickly.
Anyway, I recommend having a look at
van der Vaart, "Asymptotic Statistics", chapter 23, and
Hall, "Bootstrap and Edgeworth Expansions" (lengthy, but concise and less handwaving than van der Vaart, I'd say)
for the basics.
Chernick, "Bootstrap Methods", is more aimed at users than at mathematicians, but has a section on "where the bootstrap fails".
The classical Efron/Tibshirani has little on why the bootstrap actually works...
17,810 | Bootstrap: the issue of overfitting | Janssen and Pauls showed that bootstrapping a statistic works asymptotically iff a central limit theorem could also have been applied. So comparing a parametric estimate of the statistic's distribution (say, fitting a $\mathcal{N}(\mu,\sigma^2)$) with estimating the statistic's distribution via the bootstrap hits exactly this point.
Intuitively, bootstrapping from finite samples underestimates heavy tails of the underlying distribution. That's clear, since finite samples have a finite range, even if their true distribution's range is infinite or, even worse, has heavy tails. So the bootstrap statistic's behaviour will never be as "wild" as the original statistic. So similar to avoiding overfitting due to too many parameters in (parametric) regression, we could avoid overfitting by using the few-parameter normal distribution.
Edit responding to the comments: Remember you don't need the bootstrap to estimate the cdf. You usually use the bootstrap to get the distribution (in the broadest sense including quantiles, moments, whatever needed) of some statistic. So you don't necessarily have an overfitting problem (in terms of "the estimation due to my finite data looks too nice compared to what I should see with the true wild distribution"). But as it turned out (by the cited paper and by Frank Harrell's comment below), getting such an overfitting problem is linked to problems with parametric estimation of the same statistics.
So as your question implied, bootstrapping is not a panacea against problems with parametric estimation. The hope that the bootstrap would help with parameter problems by controlling the whole distribution is spurious.
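The finite-range point can be seen in a small simulation (my own illustration, not from the answer): resamples can never exceed the observed sample maximum, so the bootstrap distribution of a tail-sensitive statistic is systematically truncated.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.pareto(1.5, size=500)  # heavy-tailed sample (infinite variance)

# Bootstrap distribution of the sample maximum
boot_max = np.array([rng.choice(x, size=x.size, replace=True).max()
                     for _ in range(2000)])

# Every bootstrap replicate is capped by the observed maximum,
# so the statistic's true "wildness" is underestimated.
print(boot_max.max() <= x.max())  # True
```

However many resamples you draw, the right tail of `boot_max` stops at `x.max()`, while the true sampling distribution of the maximum does not.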
17,811 | Bootstrap: the issue of overfitting | One source of intuition might be to compare rates of convergence for parametric CDFs vs ECDFs, for iid data.
By DKW, the empirical CDF converges to the true CDF at a $n^{-1/2}$ rate (not just at one point, but the supremum of the absolute difference over the whole domain of the CDFs):
https://en.wikipedia.org/wiki/Dvoretzky%E2%80%93Kiefer%E2%80%93Wolfowitz_inequality
http://www.stat.cmu.edu/~larry/=stat705/Lecture12.pdf
And by Berry-Esseen, the CDF of a sampling distribution for a single mean converges to its Normal limit at a $n^{-1/2}$ rate:
https://en.wikipedia.org/wiki/Berry%E2%80%93Esseen_theorem
(This is not quite what we want---we're wondering about how the estimated parametric CDF of the data converges, not about the sampling distribution. But in the simplest ideal case, where the data are Normal and $\sigma$ is known and we just need to estimate $\mu$, I imagine the rates of convergence should be the same for the data's CDF as for the mean's CDF?)
So in a certain sense, the rate at which you need to acquire more samples is the same, whether you're estimating the CDF using an empirical CDF or whether you're estimating a parameter directly using a sample-mean-type estimator. This might help justify Frank Harrell's comment that "The number of effective parameters is not the same as the sample size."
Of course, that's not the whole story. Although the rates don't differ, the constants do. And there's much more to the nonparametric bootstrap than ECDFs---you still need to do things with the ECDF once you estimate it.
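The $n^{-1/2}$ rate from DKW can be checked numerically: if $\sqrt{n}\,\mathbb{E}[\sup_x|F_n(x)-F(x)|]$ hovers around a constant as $n$ grows, the ECDF is converging at the claimed rate. A rough sketch of my own, using standard normal data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def mean_sup_dist(n, reps=200):
    """Average Kolmogorov distance sup_x |F_n(x) - Phi(x)| over `reps`
    standard-normal samples of size n."""
    total = 0.0
    for _ in range(reps):
        cdf = stats.norm.cdf(np.sort(rng.standard_normal(n)))
        hi = np.arange(1, n + 1) / n  # ECDF just after each jump
        lo = np.arange(0, n) / n      # ECDF just before each jump
        total += max(np.max(hi - cdf), np.max(cdf - lo))
    return total / reps

# If the rate is n^{-1/2}, the scaled distance should stay roughly flat
for n in (100, 400, 1600):
    print(n, np.sqrt(n) * mean_sup_dist(n))
```

Quadrupling the sample size roughly halves the average sup-distance, consistent with the $n^{-1/2}$ rate.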
17,812 | Distribution for percentage data | You are right that the binomial distribution is for discrete proportions that arise from the number of 'successes' from a finite number of Bernoulli trials, and that this makes the distribution inappropriate for your data. You should use the Gamma distribution divided by the sum of that Gamma plus another Gamma. That is, you should use the beta distribution to model continuous proportions.
I have an example of beta regression in my answer here: Remove effect of factor on continuous proportion data using regression in R.
Update:
@DimitriyV.Masterov raises the good point that you mention your data have $0$'s, but the beta distribution is only supported on $(0,\ 1)$. This prompts the question of what should be done with such values. Some ideas can be gleaned from this excellent CV thread: How small a quantity should be added to x to avoid taking the log of 0?
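A minimal sketch of the zero-handling idea with SciPy (my own illustration; the offset `1e-4` is an arbitrary judgment call of the kind discussed in the linked thread):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
p = rng.beta(2.0, 5.0, size=2000)   # simulated continuous proportions
p[rng.random(p.size) < 0.01] = 0.0  # sprinkle in a few exact zeros

# Beta support is (0, 1): nudge boundary values inward before fitting.
eps = 1e-4
p_adj = np.clip(p, eps, 1 - eps)

# MLE of the two shape parameters, with the support fixed to (0, 1)
a, b, loc, scale = stats.beta.fit(p_adj, floc=0, fscale=1)
print(a, b)  # roughly the true shapes (2, 5)
```

Note that the fitted shapes are somewhat sensitive to the choice of `eps`, since the likelihood involves $\log p$; that sensitivity is itself an argument for modeling the zeros explicitly (e.g. a zero-inflated beta) rather than nudging them.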
17,813 | Distribution for percentage data | Percentage values represent rates independent of the number of samples. You would like to use these percentages as dependent variable and satellite imagery as an explanatory variable. However I guess not all of the 50 plots in the inventory had similar number of samples. A suitable model that relates these percentages to other variables should take into account this uncertainty in the measurement, giving more weights on plots with high samples.
Furthermore, the error distribution in the case of your data is clearly binomial. The error variance is smallest at the boundaries, which is captured by a binomial distribution.
This all seems to me as the archetypical example of using a GLM with binomial error model.
"Statistics: An Introduction using R", Chapter 14 by Crawley discusses exactly this topic and how to analyze it with R. | Distribution for percentage data | Percentage values represent rates independent of the number of samples. You would like to use these percentages as dependent variable and satellite imagery as an explanatory variable. However I guess | Distribution for percentage data
Percentage values represent rates independent of the number of samples. You would like to use these percentages as dependent variable and satellite imagery as an explanatory variable. However I guess not all of the 50 plots in the inventory had similar number of samples. A suitable model that relates these percentages to other variables should take into account this uncertainty in the measurement, giving more weights on plots with high samples.
Furthermore, the error distribution in the case of your data is clearly binomial. The error variance is smallest at boundaries, this is captured by a binomial distribution.
This all seems to me as the archetypical example of using a GLM with binomial error model.
"Statistics: An Introduction using R", Chapter 14 by Crawley discusses exactly this topic and how to analyze it with R. | Distribution for percentage data
Percentage values represent rates independent of the number of samples. You would like to use these percentages as dependent variable and satellite imagery as an explanatory variable. However I guess |
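The weighting idea can be sketched with a hand-rolled IRLS fit of a logit-link binomial GLM on simulated data (my own illustration, not the Crawley example; in R one would simply use `glm(cbind(successes, failures) ~ x, family = binomial)`):

```python
import numpy as np

def irls_binomial(X, p, n_trials, iters=25):
    """Minimal IRLS for a logit-link binomial GLM.  `p` holds observed
    proportions and `n_trials` the per-plot sample counts, which act as
    weights: plots with more samples pull the fit harder."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        eta = X @ beta
        mu = 1.0 / (1.0 + np.exp(-eta))
        w = n_trials * mu * (1.0 - mu)          # working weights
        z = eta + (p - mu) / (mu * (1.0 - mu))  # working response
        beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * z))
    return beta

# Simulated check: 50 plots with unequal sample counts
rng = np.random.default_rng(3)
X = np.column_stack([np.ones(50), rng.normal(size=50)])
n_trials = rng.integers(20, 200, size=50)
true_beta = np.array([-0.5, 1.0])
p_obs = rng.binomial(n_trials, 1 / (1 + np.exp(-X @ true_beta))) / n_trials
print(irls_binomial(X, p_obs, n_trials))  # close to (-0.5, 1.0)
```

The working weight $n_i\,\mu_i(1-\mu_i)$ encodes exactly the two points above: more samples and proportions near the boundaries both shrink the error variance.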
17,814 | Stationarity in multivariate time series | I've got the same problem and I can understand your thoughts very well!
After dealing with this subject and reading several books I'm also a little bit confused. But as I understand it: if the whole VAR system is stationary, it follows that EVERY single component is stationary. So if you test the stationarity of the VAR system (by means of the determinant of the inverse of the $I-A$ matrix, as described) it will be enough and you can proceed.
Currently I'm working with VAR models, too. In my cases the VAR system is always stationary because the moduli of the eigenvalues are all less than 1. But when I look at the single time series I would think that some series are not stationary. I think this is your problem, too...
So I think one has to decide which criterion to use. Either look at the eigenvalue condition and proceed if all are less than one in modulus, or first have a look at the single time series and then put the stationary time series (after differencing / polynomial subtraction if needed) into the VAR analysis.
By the way, if it helps, I found one reference which says that the single components do not necessarily have to be stationary, but only the vector of time series (the VAR system). This is a German reference [B. Schmitz: Einführung in die Zeitreihenanalyse, p. 191]. But in my opinion this conflicts with the proposition that VAR system stationarity results in single-component stationarity...
Hoping for more arguments from others.
17,815 | Stationarity in multivariate time series | I think I've figured out the possible solution. It all depends on the nature of the eigenvalues. Let's say we have 3 time series in our system. Correspondingly there are different possibilities for the eigenvalues
1) Case 1 : All the eigenvalues are less than 1 in modulus => the VAR model is stationary and can be built and used for forecasting after other diagnostic checks.
2) Case 2 : All the eigenvalues are > 1 in modulus => the VAR is non-stationary; we have to go for a cointegration check. If none of them are cointegrated, then differencing or log transformation is the suggested way.
3) Case 3 : An eigenvalue = 1, i.e. a unit root => we will have to go for the VECM (Vector Error Correction Model) approach.
4) Case 4 : Now this is interesting: some of the eigenvalues are < 1 and the rest are > 1, none of them being equal to 1 => the system is exploding, i.e. one of the series is stationary around a mean/variance while the other one is not. In this case either transforming the series via differencing or log transformation is the logical way, or rather dealing only with the non-stationary series with univariate methods gives better forecasts.
It sounds logical to me that if one of the series is non-stationary and the other is stationary, then the stationary one might not be impacting the non-stationary series at all. But I don't have any rigorous mathematical proof for that.
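The eigenvalue check for the cases above can be sketched for a VAR(p) via its companion matrix (my own illustration; the coefficient matrices are made up):

```python
import numpy as np

def companion(lag_matrices):
    """Companion matrix of a VAR(p) built from its (k x k) lag matrices."""
    k, p = lag_matrices[0].shape[0], len(lag_matrices)
    top = np.hstack(lag_matrices)
    bottom = np.hstack([np.eye(k * (p - 1)), np.zeros((k * (p - 1), k))])
    return np.vstack([top, bottom])

# Made-up VAR(2) coefficients for a 2-variable system
A1 = np.array([[0.5, 0.1],
               [0.2, 0.4]])
A2 = np.array([[0.1, 0.0],
               [0.0, 0.1]])

moduli = np.abs(np.linalg.eigvals(companion([A1, A2])))
print(moduli.max() < 1)  # True -> Case 1: the VAR is stationary
```

With these coefficients all companion eigenvalues lie inside the unit circle, so the system falls under Case 1; an eigenvalue on or outside the circle would route you to Cases 2-4.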
17,816 | Stationarity in multivariate time series | 1) A stationary VAR means that all of its variables are stationary. So I suggest testing each variable individually for stationarity, and thereafter for co-integration if they happen to be non-stationary.
2/3) You should difference the non-stationary components before attempting to use them in a VAR. If there is one non-stationary component, difference it before using it in the VAR; the same goes if there are several non-stationary components, or if all are non-stationary: use the differenced series in your model.
You can probably use other methods for analyzing, like machine learning, but that is a field I'm not very familiar with.
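The differencing advice can be illustrated in a couple of lines (a toy example of mine, not from the answer): differencing an integrated series recovers a stationary one.

```python
import numpy as np

rng = np.random.default_rng(5)
e = rng.normal(size=(1000, 2))  # stationary white-noise innovations
y = np.cumsum(e, axis=0)        # two random walks: non-stationary levels

dy = np.diff(y, axis=0)         # first differences, ready for a VAR
print(np.allclose(dy, e[1:]))   # True: differencing recovers the noise
```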
17,817 | Method for generating correlated non-normal data | After much searching, jumping around online forums, consulting with professors and doing A LOT of literature review, I have come to the conclusion that probably THE only way to address this problem is through the use of vine copulas indeed. For a p-variate random vector they give you some control over the pairwise skewness and kurtosis (or any higher moments): you have the freedom to specify p-1 pair copulas directly, and the remaining p*(p-1)/2 - (p-1) dimensions can be specified in some kind of conditional copula.
I welcome other methods people might've come across but at least I'm going to leave this pointer towards an answer because i cannot, for the life of me, find any other ways to address this.
17,818 | Method for generating correlated non-normal data | You might be able to solve this by modifying Ruscio and Kaczetow's (2008) algorithm. Their paper provides an iterative algorithm (with R code) that minimizes the difference between the actual and intended marginal shapes. You might be able to modify it so that it's targeting the multivariate (rather than marginal) moments.
Ruscio, J., & Kaczetow, W. (2008). Simulating multivariate nonnormal data using an iterative algorithm. Multivariate Behavioral Research, 43(3), 355-381. doi:10.1080/00273170802285693
17,819 | Method for generating correlated non-normal data | You might want to check the Generalized Elliptical Distribution, which allows for a "classical" shape matrix with flexibility for other features.
17,820 | Method for generating correlated non-normal data | I know this question was asked several years ago so the response is too late, but in case somebody else has a similar issue the following reference may be useful:
Qu et al. (2020). A method of generating multivariate non-normal random numbers with desired multivariate skewness and kurtosis. Behavior Research Methods volume 52, 939-946
link: https://link.springer.com/article/10.3758/s13428-019-01291-5
17,821 | Method for generating correlated non-normal data | I have come up with a simple method for doing this that does not involve copulas and other complex designs. I am afraid I do not have any formal reference, though the method appears to be highly effective.
The idea is simple.
1. Draw any number of variables from a joint normal distribution.
2. Apply the univariate normal CDF to each variable to derive probabilities.
3. Finally apply the inverse CDF of any distribution to simulate draws from that distribution.
I came up with this method in 2012 and demonstrated it using Stata. I have also written a recent post showing the same method using R.
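The three steps above amount to sampling from a Gaussian copula (sometimes called NORTA). A minimal sketch in Python with made-up marginals; note that the nonlinear transforms distort correlations, so the normal correlation fed in only approximately carries over to the output:

```python
import numpy as np
from scipy import stats

def draw_correlated(n, corr, marginal_ppfs, seed=0):
    """Steps 1-3 from the answer: joint normals -> normal CDF ->
    target inverse CDFs."""
    rng = np.random.default_rng(seed)
    z = rng.multivariate_normal(np.zeros(len(marginal_ppfs)), corr, size=n)
    u = stats.norm.cdf(z)                 # step 2: probabilities
    return np.column_stack([ppf(u[:, j])  # step 3: inverse CDFs
                            for j, ppf in enumerate(marginal_ppfs)])

corr = np.array([[1.0, 0.7],
                 [0.7, 1.0]])
x = draw_correlated(100_000, corr,
                    [stats.expon(scale=2).ppf, stats.beta(2, 5).ppf])
print(np.corrcoef(x.T)[0, 1])  # positive, close to (but below) 0.7
```

Because only rank correlation is preserved exactly, hitting a target Pearson correlation requires adjusting the input `corr` upward, which is one of the limitations the vine-copula answer above works around.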
17,822 | Method for generating correlated non-normal data | I believe the method presented in the following papers permits generating random multivariates with any (feasible) combination of mean, variance, skewness, and kurtosis.
Stanfield, P.M., Wilson, J.R., and Mirka, G.A. 1996. Multivariate Input Modeling with Johnson Distributions, Proceedings of the 1996 Winter Simulation Conference, eds. Charnes, J.M, Morrice, D.J., Brunner, D.T., and Swain, J.J., 1457-1464.
Stanfield, P.M., Wilson, J.R., and King, R.E. 2004. Flexible modelling of correlated operation times with application in product re-use facilities, International Journal of Production Research, Vol 42, No 11, 2179–2196.
Disclaimer: I am not one of the authors.
17,823 | Distribution with $n$th cumulant given by $\frac 1 n$? | Knowing the values of the cumulants permits us to get an idea of what the graph of this probability distribution will look like. The mean and variance of the distribution are
$$E[Y] = \kappa_1 =1, \;\; \text{Var}[Y] = \kappa_2 = \frac 12$$
while its skewness and excess kurtosis coefficients are
$$\gamma_1 = \frac{\kappa_3}{(\kappa_2)^{3/2}} = \frac{(1/3)}{(1/2)^{3/2}} = \frac{2\sqrt 2}{3}$$
$$\gamma_2 = \frac{\kappa_4}{(\kappa_2)^{2}} = \frac{(1/4)}{(1/2)^{2}} = 1$$
So this could be a familiar looking graph of a positive random variable exhibiting positive skewness.
As for finding the probability distribution, a craftsman's approach could be to specify a generic discrete probability distribution, taking values in $\{0,1,...,m\}$, with corresponding probabilities $\{p_0,p_1,...,p_m\},\; \sum_{k=0}^mp_k =1$, and then use the cumulants to calculate the raw moments, with the purpose of forming a system of linear equations with the probabilities being the unknowns. Cumulants are related to raw moments by
$$\kappa_n=\mu'_n-\sum_{i=1}^{n-1}{n-1 \choose i-1}\kappa_i \mu_{n-i}'$$
Solved for the first five raw moments this gives (the numerical value at the end is specific to the cumulants in our case)
$$\begin{align}
\mu'_1=&\kappa_1 =1\\
\mu'_2=&\kappa_2+\kappa_1^2=3/2\\
\mu'_3=&\kappa_3+3\kappa_2\kappa_1+\kappa_1^3=17/6\\
\mu'_4=&\kappa_4+4\kappa_3\kappa_1+3\kappa_2^2+6\kappa_2\kappa_1^2+\kappa_1^4=19/3\\
\mu'_5=&\kappa_5+5\kappa_4\kappa_1+10\kappa_3\kappa_2+10\kappa_3\kappa_1^2+15\kappa_2^2\kappa_1+10\kappa_2\kappa_1^3+\kappa_1^5=243/15\\
\end{align} $$
If we (momentarily) set $m=5$ we have the system of equations
$$\begin{align}
\sum_{k=0}^5p_k=&1,\qquad \sum_{k=0}^5p_kk=1\\
\sum_{k=0}^5p_kk^2=&3/2,\qquad \sum_{k=0}^5p_kk^3=17/6\\
\sum_{k=0}^5p_kk^4=& 19/3 ,\qquad \sum_{k=0}^5p_kk^5= 243/15\\
&s.t. p_k\ge 0 \;\;\forall k\\
\end{align} $$
Of course we do not want $m$ to be equal to $5$. But by gradually increasing $m$ (and obtaining the values of the subsequent moments), we should eventually reach a point where the solution for the probabilities stabilizes. Such an approach cannot be done by hand, but I have neither the software access nor the programming skills necessary to perform such a task.
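For what it's worth, the first half of that task (cumulants to raw moments) takes only a few lines. A Python sketch, using exact fractions, of the recursion quoted above with the $\kappa_n = 1/n$ of this question:

```python
from fractions import Fraction
from math import comb

def raw_moments(kappa, n_max):
    # mu'_n = kappa_n + sum_{i=1}^{n-1} C(n-1, i-1) * kappa_i * mu'_{n-i}
    mu = {0: Fraction(1)}
    for n in range(1, n_max + 1):
        mu[n] = kappa(n) + sum(comb(n - 1, i - 1) * kappa(i) * mu[n - i]
                               for i in range(1, n))
    return mu

mu = raw_moments(lambda n: Fraction(1, n), 5)
print(mu[3], mu[4], mu[5])  # 17/6 19/3 81/5  (81/5 = 243/15, as above)
```

The second half, solving the moment system for nonnegative $p_k$ while increasing $m$, would need a constrained least-squares solver and is left out here.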
17,824 | Choosing complexity parameter in CART | In practice I have seen both approaches taken, and I think that generally your results would not be expected to differ much either way.
That being said, Hastie et al. recommend the "one-standard error" rule in The Elements of Statistical Learning, and I tend to trust their judgment (Section 7.10, pg. 244 in my version). The relevant quote is:
Often a "one-standard error" rule is used with cross-validation, in which we choose the most parsimonious model whose error is no more than one standard error above the error of the best model.
Your intuition for why one would follow the one-standard error rule is right - you would do that to avoid selecting a model that overfits the data.
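The rule is mechanical enough to sketch. A hypothetical Python illustration (the complexity / CV-error / standard-error triples are made-up numbers):

```python
def one_se_rule(models):
    # models: (complexity, cv_error, std_error) per candidate model
    best = min(models, key=lambda m: m[1])
    threshold = best[1] + best[2]  # best CV error plus its standard error
    eligible = [m for m in models if m[1] <= threshold]
    return min(eligible, key=lambda m: m[0])  # most parsimonious eligible

models = [(1, 0.40, 0.02), (2, 0.32, 0.02), (3, 0.30, 0.02), (4, 0.31, 0.02)]
print(one_se_rule(models))  # (2, 0.32, 0.02): within 0.30 + 0.02 of the best
```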
17,825 | Choosing complexity parameter in CART | You should first start by using the arguments minsplit=0 and cp=0 (complexity parameter), then use the functions plotcp(T.max) and printcp(T.max) to choose the value of cp corresponding to the minimum relative error, and prune the tree with the function prune.rpart(T.max, cp=....)
This should get you the optimal classification tree, as fully grown trees tend to be over-optimistic.
17,826 | Understanding the k lag in R's augmented Dickey Fuller test | It's been a while since I looked at ADF tests; however, I do remember at least two versions of the ADF test.
http://www.stat.ucl.ac.be/ISdidactique/Rhelp/library/tseries/html/adf.test.html
http://cran.r-project.org/web/packages/fUnitRoots/
The fUnitRoots package has a function called adfTest(). I think the "trend" issue is handled differently in those packages.
Edit: From page 14 of the following link, there were 4 versions (uroot discontinued) of the ADF test:
http://math.uncc.edu/~zcai/FinTS.pdf
One more link. Read section 6.3 in the following link. It does a far better job than I could do in explaining the lag term:
http://www.yats.com/doc/cointegration-en.html
Also, I would be careful with any seasonal model. Unless you're sure there's some seasonality present, I would avoid using seasonal terms. Why? Anything can be decomposed into seasonal terms, even if no seasonality is actually present. Here are two examples:
#First example: White noise
x <- rnorm(200)
#Use stl() to separate the trend and seasonal term
x.ts <- ts(x, freq=4)
x.stl <- stl(x.ts, s.window = "periodic")
plot(x.stl)
#Use decompose() to separate the trend and seasonal term
x.dec <- decompose(x.ts)
plot(x.dec)
#===========================================
#Second example: a random walk (cumulative sum of the white noise)
x1 <- cumsum(x)
#Use stl() to separate the trend and seasonal term
x1.ts <- ts(x1, freq=4)
x1.stl <- stl(x1.ts, s.window = "periodic")
plot(x1.stl)
#Use decompose() to separate the trend and seasonal term
x1.dec <- decompose(x1.ts)
plot(x1.dec)
The graph below is from the above plot(x.stl) statement. stl() found a small seasonal term in white noise. You might say that term is so small that it's really not an issue. The problem is, in real data, you don't know if that term is a problem or not. In the example below, notice that the trend data series has segments where it looks like a filtered version of the raw data, and other segments where it might be considered significantly different from the raw data.
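The same point can be made without any decomposition machinery. A Python sketch of a crude seasonal extraction (phase means only, skipping the trend-filtering that decompose() does) applied to pure white noise:

```python
import random

random.seed(42)
x = [random.gauss(0, 1) for _ in range(200)]  # white noise, no seasonality

freq = 4
# mean of the series at each seasonal phase
phase_means = [sum(x[p::freq]) / len(x[p::freq]) for p in range(freq)]
grand_mean = sum(phase_means) / freq
seasonal = [m - grand_mean for m in phase_means]  # centred "seasonal" effect

# a nonzero "seasonal component" appears even though none exists in the data
print(max(abs(s) for s in seasonal) > 0)  # True
```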
17,827 | Understanding the k lag in R's augmented Dickey Fuller test | The k parameter sets the number of lags added to address serial correlation. The A in ADF means that the test is augmented by the addition of lags. The selection of the number of lags in ADF can be done in a variety of ways. A common way is to start with a large number of lags selected a priori and reduce the number of lags sequentially until the longest lag is statistically significant.
You could test for serial correlation in the residuals after applying the lags in ADF.
17,828 | What does VC dimension tell us about deep learning? | The rule of thumb you talk about cannot be applied to a neural network.
A neural network has some basic parameters, i.e. its weights and biases. The number of weights depends on the number of connections between the network layers, and the number of biases depends on the number of neurons.
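For a fully connected feed-forward net that count is easy to write down. A small Python sketch with a hypothetical 4-8-3 architecture:

```python
def count_parameters(layer_sizes):
    # weights: one per connection between consecutive layers
    weights = sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))
    # biases: one per neuron outside the input layer
    biases = sum(layer_sizes[1:])
    return weights, biases

print(count_parameters([4, 8, 3]))  # (56, 11): 4*8 + 8*3 weights, 8 + 3 biases
```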
The size of the data required depends heavily on:
The type of neural network used.
The regularization techniques used in the net.
The learning rate used in training the net.
This being said, a more reliable way to know whether the model is overfitting is to check whether the validation error is close to the training error. If yes, then the model is working fine. If not, then the model is most likely overfitting, which means you need to reduce the size of your model or introduce regularization techniques.
17,829 | Why would one use `random' confidence or credible intervals? | Randomized procedures are sometimes used in theory because they simplify the theory. In typical statistical problems they do not make sense in practice, while in game-theory settings they can make sense.
The only reason I can see to use them in practice is if they somehow simplify calculations.
Theoretically, one can argue it should not be used, from the sufficiency principle: statistical conclusions should be based only on sufficient summaries of the data, and randomization introduces dependence on an extraneous random variable $U$ which is not part of a sufficient summary of the data.
UPDATE
To answer whuber's comments below, quoted here: "Why do randomized procedures "not make sense in practice"? As others have noted, experimenters are perfectly willing to use randomization in the construction of their experimental data, such as randomized assignment of treatment and control, so what is so different (and impractical or objectionable) about using randomization in the ensuing analysis of the data? "
Well, randomization of the experiment to get the data is done for a purpose, mainly to break causality chains. Whether and when that is effective is another discussion. What could be the purpose of using randomization as part of the analysis? The only reason I have ever seen is that it makes the mathematical theory more complete! That's OK as far as it goes. In game-theory contexts, when there is an actual adversary, randomization may help to confuse him. In real decision contexts (sell, or not sell?) a decision must be taken, and if there is no evidence in the data, maybe one could just throw a coin. But in a scientific context, where the question is what we can learn from the data, randomization seems out of place. I cannot see any real advantage from it! If you disagree, do you have an argument which could convince a biologist or a chemist? (And here I do not think about simulation as part of bootstrap or MCMC.)
17,830 | Why would one use `random' confidence or credible intervals? | The idea refers to testing, but in view of the duality of testing and confidence intervals, the same logic applies to CIs.
Basically, randomized tests ensure that a given size of a test can be obtained for discrete-valued experiments, too.
Suppose you want to test, at level $\alpha=0.05$, the fairness of a coin (insert any example of your choice here that can be modelled with a Binomial experiment) using the probability $p$ of heads. That is, you test $H_0:p=0.5$ against (say) $H_1:p<0.5$. Suppose you have tossed the coin $n=10$ times.
Obviously, few heads are evidence against $H_0$. For $k=2$ successes, we may compute the $p$-value of the test by pbinom(2,10,.5) in R, yielding 0.0547. For $k=1$, we get 0.0107. Hence, there is no way to reject a true $H_0$ with probability exactly 5% without randomization.
If we randomize over rejection and acceptance when observing $k=2$, we may still achieve this goal.
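Concretely, the randomization probability $\gamma$ at $k=2$ solves $P(K\le 1) + \gamma\,P(K=2) = 0.05$. A Python sketch of the computation (same numbers as above):

```python
from math import comb

n, alpha = 10, 0.05
pmf = [comb(n, k) * 0.5 ** n for k in range(n + 1)]  # Binomial(10, 1/2)

p_le_1 = pmf[0] + pmf[1]  # 11/1024, about 0.0107: always reject here
p_eq_2 = pmf[2]           # 45/1024, about 0.0439
gamma = (alpha - p_le_1) / p_eq_2  # probability of rejecting when k = 2

print(round(gamma, 4))                     # 0.8933
print(round(p_le_1 + gamma * p_eq_2, 10))  # 0.05: exact size achieved
```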
17,831 | In Kneser-Ney smoothing, how are unseen words handled? | Dan Jurafsky and Jim Martin have published a chapter on N-Gram models which talks a bit about this problem:
At the termination of the recursion, unigrams are interpolated with the uniform distribution:
$$P_{KN}(w) = \frac{\max(c_{KN}(w)-d,0)}{\sum_{w'}c_{KN}(w')}+\lambda(\epsilon)\frac{1}{|V|}$$
If we want to include an unknown word <UNK>, it’s just included as a regular vocabulary entry with count zero, and hence its probability will be:
$$\frac{\lambda(\epsilon)}{|V|}$$
I've tried to find out what this means, but am not sure if $\epsilon$ just means $\lim_{x\rightarrow0}x$. If this is the case, and you assume that as the count goes to zero, maybe $\lambda(\epsilon)$ goes to $d$, according to:
$$\lambda(w_{i-1}) = \frac{d}{c(w_{i-1})}\vert\{w:c(w_{i-1},w)>0\}\vert$$
then the unknown word just gets assigned a fraction of the discount, i.e.:
$$\frac{\lambda(\epsilon)}{|V|} = \frac{d}{|V|}$$
I'm not confident about this answer at all, but wanted to get it out there in case it sparks some more thoughts.
Update:
Digging around some more, it seems like $\epsilon$ is typically used to denote the empty string (""), but it's still not clear how this affects the calculation of $\lambda$. $\frac{d}{|V|}$ is still my best guess.
At the termination of the recursion, unigrams are interpolated with the uniform distributio | In Kneser-Ney smoothing, how are unseen words handled?
Dan Jurafsky and Jim Martin have published a chapter on N-Gram models which talks a bit about this problem:
At the termination of the recursion, unigrams are interpolated with the uniform distribution:
$
\begin{align}
P_{KN}(w) = \frac{\max(c_{KN}(w)-d,0)}{\sum_{w'}c_{KN}(w')}+\lambda(\epsilon)\frac{1}{|V|}
\end{align}
$
If we want to include an unknown word <UNK>, it’s just included as a regular vocabulary entry with count zero, and hence its probability will be:
$
\begin{align}
\frac{\lambda(\epsilon)}{|V|}
\end{align}
$
I've tried to find out what this means, but am not sure if $\epsilon$ just means $\lim_{x\rightarrow0}x$. If this is the case, and you assume that as the count goes to zero, maybe $\lambda(\epsilon)$ goes to $d$, according to:
$
\begin{align}
\lambda(w_{i-1}) = \frac{d}{c(w_{i-1})}\vert\{w:c(w_{i-1},w)>0\}\vert
\end{align}
$
then the unknown word just gets assigned a fraction of the discount, i.e.:
$
\begin{align}
\frac{\lambda(\epsilon)}{|V|} = \frac{d}{|V|}
\end{align}
$
I'm not confident about this answer at all, but wanted to get it out there in case it sparks some more thoughts.
Update:
Digging around some more, it seems like $\epsilon$ is typically used to denote the empty string (""), but it's still not clear how this affects the calculation of $\lambda$. $\frac{d}{|V|}$ is still my best guess | In Kneser-Ney smoothing, how are unseen words handled?
Dan Jurafsky and Jim Martin have published a chapter on N-Gram models which talks a bit about this problem:
At the termination of the recursion, unigrams are interpolated with the uniform distributio |
17,832 | In Kneser-Ney smoothing, how are unseen words handled? | There are many ways to train a model with <UNK>, though Jurafsky suggests choosing the words that occur very few times in training and simply changing them to <UNK>.
Then simply train the probabilities as you normally would.
See this video starting at 3:40 –
https://class.coursera.org/nlp/lecture/19
Another approach is to simply consider a word as <UNK> the very first time it is seen in training, though from my experience this approach assigns too much of the probability mass to <UNK>.
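The first recipe (map rare training words to <UNK>, then train as usual) can be sketched in a few lines of Python; the count threshold here is an arbitrary choice:

```python
from collections import Counter

def mark_rare_as_unk(tokens, min_count=2, unk="<UNK>"):
    # replace every token whose corpus count is below min_count by <UNK>
    counts = Counter(tokens)
    return [t if counts[t] >= min_count else unk for t in tokens]

corpus = "the cat sat on the mat the cat".split()
print(mark_rare_as_unk(corpus))
# ['the', 'cat', '<UNK>', '<UNK>', 'the', '<UNK>', 'the', 'cat']
```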
17,833 | In Kneser-Ney smoothing, how are unseen words handled? | Just a few thoughts, I am far from being an expert on the matter so I do not intend to provide an answer to the question but to analyze it.
The simple thing to do would be to calculate $\lambda(\epsilon)$ by forcing the sum to be one. This is reasonable since the empty string is never seen in the training set (nothing can be predicted out of nothing) and the sum has to be one.
If this is the case, $\lambda(\epsilon)$ can be estimated by:
$$\lambda(\epsilon)=1-\frac{\sum_w{\max(C_{KN}(w) - d, 0)}}{\sum_{w'}{C_{KN}(w')}}$$
Remember that here $C_{KN}(w)$ is obtained from the bigram model.
Another option would be to estimate the <unk> probability with the methods mentioned by Randy and treating it as a regular token.
I think this step is made to ensure that the formulas are consistent. Notice that the term $\frac{\lambda(\epsilon)}{|V|}$ does not depend on the context and assigns a fixed value to the probability of every token. If you want to predict the next word you can omit this term; on the other hand, if you want to compare the Kneser-Ney probability assigned to each token under two or more different contexts, you might want to use it.
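The normalization argument is easy to check numerically. A Python sketch with toy counts and $d=0.75$ (both arbitrary), showing that this choice of $\lambda(\epsilon)$ makes the interpolated unigram distribution sum to one, with a zero-count <unk> receiving exactly $\lambda(\epsilon)/|V|$:

```python
d = 0.75
counts = {"the": 4, "cat": 2, "sat": 1, "<unk>": 0}  # toy continuation counts
N = sum(counts.values())
V = len(counts)

# lambda(eps) = 1 - sum_w max(c(w) - d, 0) / N, the leftover probability mass
lam = 1 - sum(max(c - d, 0) for c in counts.values()) / N

p = {w: max(c - d, 0) / N + lam / V for w, c in counts.items()}

print(round(sum(p.values()), 10))  # 1.0
print(p["<unk>"] == lam / V)       # True
```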
17,834 | What does it mean to explain variance? | A main issue here is that the measure of "variation" in regression analysis is related to the squared differences of observed variables from their predicted mean values. This is a useful choice of a measure of variation, both for theoretical analysis and in practical work, because squared differences from the mean are related to the variance of a random variable, and the variance of the sum of two independent random variables is simply the sum of their individual variances.
$R^2$ in multiple regression represents the fraction of "variation" in the observed variable that is accounted for by the regression model when squared differences from predicted means are used as the measure of variation. The Multiple R is simply the square root of $R^2$.
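As a toy illustration (made-up numbers), both quantities can be computed directly from observed and fitted values:

```python
from math import sqrt

def r_squared(y, y_hat):
    # 1 - SS_residual / SS_total, squared differences as the variation measure
    mean_y = sum(y) / len(y)
    ss_tot = sum((yi - mean_y) ** 2 for yi in y)
    ss_res = sum((yi - fi) ** 2 for yi, fi in zip(y, y_hat))
    return 1 - ss_res / ss_tot

y = [1.0, 2.0, 3.0, 4.0]
y_hat = [1.1, 1.9, 3.2, 3.8]
r2 = r_squared(y, y_hat)
multiple_r = sqrt(r2)  # the Multiple R is just the square root
print(round(r2, 3))  # 0.98
```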
I'm afraid that I've never understood the usefulness of specifying the value of the Multiple R rather than $R^2$. Unlike the correlation coefficient $r$ in a univariate regression, which shows both the direction and strength of the relation between 2 variables, specifying the Multiple R doesn't seem to add much beyond a chance for additional confusion.
17,835 | Relationship between McNemar's test and conditional logistic regression
Sorry, it's an old issue, I came across this by chance.
There is a mistake in your code for the McNemar test. Try with:
library(survival)  # for clogit() and strata()

n <- 100
do.one <- function(n) {
  id <- rep(1:n, each = 2)
  case <- rep(0:1, times = n)
  rs <- rbinom(n * 2, 1, 0.5)
  c(
    'pclogit' = coef(summary(clogit(case ~ rs + strata(id))))[5],
    'pmctest' = mcnemar.test(table(rs[case == 0], rs[case == 1]))$p.value
  )
}
out <- replicate(1000, do.one(n))
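For intuition about what `mcnemar.test` computes in the simulation: McNemar's statistic depends only on the two discordant cell counts. A minimal Python sketch without the continuity correction that R applies by default (the counts 5 and 15 are made up):

```python
from scipy.stats import chi2

def mcnemar_chi2(b, c):
    """McNemar's chi-square from the two discordant cell counts b and c
    (no continuity correction, unlike R's mcnemar.test default)."""
    stat = (b - c) ** 2 / (b + c)
    return stat, chi2.sf(stat, df=1)

stat, p = mcnemar_chi2(5, 15)   # made-up discordant counts
```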
17,836 | Relationship between McNemar's test and conditional logistic regression
There are two competing statistical models. Model #1 (the null hypothesis, McNemar): the probability of switching from correct to incorrect equals the probability of switching from incorrect to correct (0.5), or equivalently b = c. Model #2: the probability of switching from correct to incorrect is less than the probability of switching from incorrect to correct, or equivalently b > c. For Model #2 we use the maximum likelihood method and logistic regression to determine the parameters representing that model. The statistical methods look different because each method reflects a different model.
17,837 | training approaches for highly-imbalanced data set
From a recent post on reddit, the reply by datapraxis will be of interest.
edit: the paper mentioned is Haibo He, Edwardo A. Garcia, "Learning from Imbalanced Data," IEEE Transactions on Knowledge and Data Engineering, pp. 1263-1284, September, 2009 (PDF)
17,838 | training approaches for highly-imbalanced data set
Pairwise Expanded Logistic Regression, ROC-based learning, Boosting and Bagging (Bootstrap aggregating), Link-based cluster ensemble (LCE), Bayesian Network, Nearest centroid classifiers, Bayesian Techniques, Weighted rough set, k-NN
and a lot of sampling methods to handle imbalance.
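As a sketch of the simplest of those sampling methods, random oversampling of the minority class until the classes balance (toy data, plain NumPy):

```python
import numpy as np

# random oversampling of the minority class until the classes balance (toy data)
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))
y = np.array([0] * 90 + [1] * 10)               # 9:1 imbalance

minority = np.flatnonzero(y == 1)
extra = rng.choice(minority, size=90 - 10, replace=True)  # resample with replacement
X_bal = np.vstack([X, X[extra]])
y_bal = np.concatenate([y, y[extra]])
```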
17,839 | PCA model selection using AIC (or BIC)
The works of Minka (Automatic choice of dimensionality for PCA, 2000) and of Tipping & Bishop (Probabilistic Principal Component Analysis) regarding a probabilistic view of PCA might provide you with the framework you are interested in.
Minka's work provides an approximation of the log-likelihood $\mathrm{log}\: p(D|k)$, where $k$ is the latent dimensionality of your dataset $D$, by using a Laplace approximation; as stated explicitly: "A simplification of Laplace's method is the BIC approximation."
Clearly this takes a Bayesian viewpoint of your problem that is not based on the information theory criteria (KL-divergence) used by AIC.
Regarding the original "determination of parameters' number" question I also think @whuber's comment carries the correct intuition.
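As an aside, scikit-learn's PCA exposes Minka's criterion directly via `n_components='mle'`. A small sketch on synthetic data with a known low-dimensional signal (the data-generating choices here are arbitrary):

```python
import numpy as np
from sklearn.decomposition import PCA

# scikit-learn's n_components='mle' uses Minka's evidence approximation;
# synthetic data with 3 true components and weak isotropic noise
rng = np.random.default_rng(0)
latent = rng.standard_normal((500, 3))
W = rng.standard_normal((3, 10))
X = latent @ W + 0.1 * rng.standard_normal((500, 10))

pca = PCA(n_components="mle", svd_solver="full").fit(X)
k_hat = pca.n_components_   # the dimensionality Minka's criterion selects
```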
17,840 | PCA model selection using AIC (or BIC)
Selecting an "appropriate" number of components in PCA can be performed elegantly with Horn's Parallel Analysis (PA). Papers show that this criterion consistently outperforms rules of thumb such as the elbow criterion or Kaiser's rule. The R package "paran" has an implementation of PA that requires only a couple of mouse clicks.
Of course, how many components you retain depends on the goals of the data reduction. If you only wish to retain variance that is "meaningful", PA will give an optimal reduction. If you wish to minimize the information loss of the original data, however, you should retain enough components to cover 95% explained variance. This will obviously keep many more components than PA, although for high-dimensional datasets, the dimensionality reduction will still be considerable.
One final note about PCA as a "model selection" problem. I don't fully agree with Peter's reply. There have been a number of papers that reformulated PCA as a regression-type problem, such as Sparse PCA, Sparse Probabilistic PCA, or ScotLASS. In these "model-based" PCA solutions, loadings are parameters that can be set to 0 with appropriate penalty terms. Presumably, in this context, it would also be possible to calculate AIC or BIC type statistics for the model under consideration.
This approach could theoretically include a model where, for example, two PCs are unrestricted (all loadings non-zero), versus a model where PC1 is unrestricted and PC2 has all loadings set to 0. This would be equivalent to inferring whether PC2 is redundant on the whole.
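For readers outside R, Horn's PA is short to write by hand: retain the components whose observed eigenvalues exceed a high percentile of eigenvalues obtained from random data of the same shape. A sketch (the percentile, iteration count, and toy data are illustrative choices):

```python
import numpy as np

def parallel_analysis(X, n_iter=200, quantile=95, seed=0):
    """Horn's Parallel Analysis: count components whose observed eigenvalues
    exceed the given percentile of eigenvalues from random data of the same shape."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    obs = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))[::-1]  # descending
    rand = np.empty((n_iter, p))
    for i in range(n_iter):
        R = rng.standard_normal((n, p))
        rand[i] = np.linalg.eigvalsh(np.corrcoef(R, rowvar=False))[::-1]
    thresholds = np.percentile(rand, quantile, axis=0)
    return int(np.sum(obs > thresholds))

# toy check: two correlated blocks of three variables each -> two components
rng = np.random.default_rng(1)
factors = rng.standard_normal((300, 2))
loadings = np.zeros((2, 6))
loadings[0, :3] = 1.0
loadings[1, 3:] = 1.0
X = factors @ loadings + 0.5 * rng.standard_normal((300, 6))
k = parallel_analysis(X)
```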
References (PA):
Dinno, A. (2012). paran: Horn's Test of Principal Components/Factors. R package version 1.5.1. http://CRAN.R-project.org/package=paran
Horn, J.L. (1965). A rationale and a test for the number of factors in factor analysis. Psychometrika, 30, 179–185.
Hubbard, R. & Allen, S.J. (1987). An empirical comparison of alternative methods for principal component extraction. Journal of Business Research, 15, 173–190.
Zwick, W.R. & Velicer, W.F. (1986). Comparison of five rules for determining the number of components to retain. Psychological Bulletin, 99, 432–442.
Selecting an "appropriate" number of components in PCA can be performed elegantly with Horn's Parallel Analysis (PA). Papers show that this criterion consistently outperforms rules of thumb such as the elbow criterion or Kaiser's rule. The R package "paran" has an implementation of PA that requires only a couple of mouse clicks.
Of course, how many components you retain depends on the goals of the data reduction. If you only wish to retain variance that is "meaningful", PA will give an optimal reduction. If you wish to minimize the information loss of the original data, however, you should retain enough components to cover 95% explained variance. This will obviously keep many more components than PA, although for high-dimensional datasets, the dimensionality reduction will still be considerable.
One final note about PCA as a "model selection" problem. I don't fully agree with Peter's reply. There have been a number of papers that reformulated PCA as a regression-type problem, such as Sparse PCA, Sparse Probabilistic PCA, or ScotLASS. In these "model-based" PCA solutions, loadings are parameters that can be set to 0 with appropriate penalty terms. Presumably, in this context, it would also be possible to calculate AIC or BIC type statistics for the model under consideration.
This approach could theoretically include a model where, for example, two PCs are unrestricted (all loadings non-zero), versus a model where PC1 is unrestricted and PC2 has all loadings set to 0. This would be equivalent to inferring whether PC2 is redundant on the whole.
References (PA):
Dinno, A. (2012). paran: Horn's Test of Principal Components/Factors. R package version 1.5.1. http://CRAN.R-project.org/package=paran
Horn J.L. 1965. A rationale and a test for the number of factors in factor analysis. Psychometrika. 30: 179–185
Hubbard, R. & Allen S.J. (1987). An empirical comparison of alternative methods for principal component extraction. Journal of Business Research, 15, 173-190.
Zwick, W.R. & Velicer, W.F. 1986. Comparison of Five Rules for Determining the Number of Components to Retain. Psychological Bulletin. 99: 432–442 | PCA model selection using AIC (or BIC)
Selecting an "appropriate" number of components in PCA can be performed elegantly with Horn's Parallel Analysis (PA). Papers show that this criterion consistently outperforms rules of thumb such as th |
17,841 | PCA model selection using AIC (or BIC)
AIC is designed for model selection. This is not really a model selection problem and maybe you would be better off taking a different approach. An alternative could be to specify a certain total percentage of variance explained (say 75%) and stop adding components once that percentage is reached, if it ever is.
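That stopping rule is a one-liner with scikit-learn's PCA (the 75% threshold and the toy low-rank data are just for illustration):

```python
import numpy as np
from sklearn.decomposition import PCA

# keep the smallest k reaching 75% cumulative explained variance; toy low-rank data
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 4)) @ rng.standard_normal((4, 12))

ratios = PCA().fit(X).explained_variance_ratio_
k = int(np.searchsorted(np.cumsum(ratios), 0.75) + 1)  # first k with cumsum >= 0.75
```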
17,842 | PCA model selection using AIC (or BIC)
AIC is not appropriate here. You are not selecting among models with varying numbers of parameters - a principal component is not a parameter.
There are a number of methods of deciding on the number of factors or components from a factor analysis or principal component analysis - scree test, eigenvalue > 1, etc. But the real test is substantive: What number of factors makes sense? Look at the factors, consider the weights, figure out which is best suited to your data.
Like other things in statistics, this is not something that can easily be automated.
17,843 | Encoding categorical features to numbers for machine learning
You can always treat your user ids as a bag of words: most text classifiers can deal with hundreds of thousands of dimensions when the data is sparse (many zeros that you do not need to store explicitly in memory, for instance if you use the Compressed Sparse Row representation for your data matrix).
However, the question is: does it make sense w.r.t. your specific problem to treat user ids as features? Wouldn't it make more sense to denormalize your relational data and use user features (age, location, char-ngrams of the online nickname, transaction history...) instead of their ids?
You could also perform clustering of your raw user vectors and use the ids of the top N closest cluster centers as activated features instead of the user ids.
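A sketch of the sparse "bag of user ids" representation using SciPy's CSR format (the tiny id array is made up):

```python
import numpy as np
from scipy.sparse import csr_matrix

# user ids as a sparse one-hot matrix in Compressed Sparse Row format;
# one row (e.g. one transaction) per entry in user_ids
user_ids = np.array([3, 0, 3, 2])
n_users = 4

rows = np.arange(len(user_ids))
X = csr_matrix((np.ones(len(user_ids)), (rows, user_ids)),
               shape=(len(user_ids), n_users))
```

Only the nonzero entries are stored, so the same construction scales to hundreds of thousands of user ids.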
17,844 | Encoding categorical features to numbers for machine learning
Equilateral encoding is probably what you are looking for when trying to encode classes into a neural network. It tends to work better than the "1 of n" encoding referenced in other posts. For reference may I suggest: http://www.heatonresearch.com/wiki/Equilateral
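For a rough idea of what equilateral encoding produces, here is a hypothetical construction (a sketch, not Heaton's implementation): the n classes are mapped to the vertices of a regular simplex in n−1 dimensions, so every pair of class codes is the same distance apart:

```python
import numpy as np
from itertools import combinations

def equilateral_codes(n_classes):
    """Map n classes to the vertices of a regular simplex in n-1 dimensions,
    so all class codes are pairwise equidistant (illustrative construction)."""
    E = np.eye(n_classes)
    E -= E.mean(axis=0)                       # centre the one-hot corners
    _, _, Vt = np.linalg.svd(E, full_matrices=False)
    codes = E @ Vt[: n_classes - 1].T         # rotate into n-1 dimensions
    return codes / np.abs(codes).max()        # scale into [-1, 1]

C = equilateral_codes(4)                      # 4 classes -> 3-dimensional codes
dists = [np.linalg.norm(C[i] - C[j]) for i, j in combinations(range(4), 2)]
```

This shows the key property over "1 of n": one fewer dimension, with no class pair privileged over another.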
17,845 | What is Deborah Mayo's "severity"?
Yes, the severity of a statistical claim C is always in relation to a test and an outcome. It's a measure of how well a claim's flaws are put to a test and found absent. A hypothesis C severely passes a test with result x to the extent that a result more discordant from C than x would probably have occurred were C false. Say that a null hypothesis is rejected in a one-sided Normal test of the mean with an outcome that just reaches the significance level of .025. The significant result indicates some discrepancy from the null, but there is a worry someone will make mountains out of molehills. Suppose the power against an alternative mu' is high. Then the severity for inferring mu > mu' is LOW. That's because a larger difference than the one observed would be probable assuming mu' is true. So severity goes in the direction opposite of power when the data lead to a rejection of a null. My new book explains all this in clear detail: Statistical Inference as Severe Testing: How to Get Beyond the Statistics Wars.
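The post-rejection severity assessment described here can be written down directly for the one-sided Normal test; a small sketch with made-up numbers, where SEV(mu > mu') is the probability of a result less discordant than the one observed, computed under mu = mu':

```python
import numpy as np
from scipy.stats import norm

def severity_mu_greater(xbar, mu1, sigma, n):
    """After rejecting H0 in a one-sided Normal test of the mean, the severity
    of inferring mu > mu1: the probability of an observed mean no larger than
    xbar, evaluated at mu = mu1 (illustrative sketch)."""
    return norm.cdf((xbar - mu1) / (sigma / np.sqrt(n)))

# e.g. xbar = 0.4, sigma = 1, n = 100 (made-up numbers): "mu > 0.2" passes with
# high severity, "mu > 0.4" only with severity 0.5, and it keeps falling beyond that
sev_low = severity_mu_greater(0.4, 0.2, 1.0, 100)
sev_mid = severity_mu_greater(0.4, 0.4, 1.0, 100)
```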
17,846 | Comparing Laplace Approximation and Variational Inference
I am not aware of any general results, but in this paper the authors have some thoughts for Gaussian variational approximations (GVAs) for generalized linear mixed models (GLMMs). Let $\vec y$ be the observed outcomes, $X$ be a fixed effect design matrix, $Z$ be a random effect design matrix, let $\vec U$ denote an unknown random effect, and consider a GLMM with densities:
$$
\begin{align*}
f_{\vec Y\mid\vec U} (\vec y;\vec u) &=
\exp\left(\vec y^\top(X\vec\beta + Z\vec u)
- \vec 1^\top b(X\vec\beta + Z\vec u)
+ \vec 1^\top c(\vec y)\right) \\
f_{\vec U}(\vec u) &= \phi^{(K)}(\vec u;\vec 0, \Sigma) \\
f(\vec y,\vec u) &= f_{\vec Y\mid\vec U} (\vec y;\vec u)f_{\vec U}(\vec u)
\end{align*}
$$
where I use the same notation as in the paper and $\phi^{(K)}$ is a $K$-dimensional multivariate normal distribution density function.
Using a Laplace Approximation
Let
$$
g(\vec u) = \log f(\vec y,\vec u).
$$
Then we use the approximation
$$
\log\int \exp(g(\vec u)) d\vec u \approx
\frac K2\log(2\pi) - \frac 12\log\lvert-g''(\widehat u)\rvert
+ g(\widehat u)
$$
where
$$
\widehat u = \text{argmax}_{\vec u} g(\vec u).
$$
Using a Gaussian Variational Approximation
The lower bound in the GVA with a mean
$\vec\mu$ and covariance matrix $\Lambda$ is:
$$
\begin{align*}
\int \exp(g(\vec u)) d\vec u &\approx
\vec y^\top(X\vec\beta + Z\vec\mu)
- \vec 1^\top B(X\vec\beta + Z\vec\mu, \text{diag}(Z\Lambda Z^\top)) \\
&\hspace{25pt}+ \vec 1^\top c(\vec y) + \frac 12 \Big(
\log\lvert\Sigma^{-1}\rvert + \log\lvert\Lambda\rvert
-\vec\mu^\top\Sigma^{-1}\vec\mu \\
&\hspace{25pt} - \text{trace}(\Sigma^{-1}\Lambda)
+ K \Big) \\
B(\mu,\sigma^2) &= \int b(\sigma x + \mu)\phi(x) d x
\end{align*}
$$
where $\text{diag}(\cdot)$ returns a diagonal matrix.
Comparing the Two
Suppose that we can show that $\Lambda\rightarrow 0$ (the estimated conditional covariance matrix of the random effects tends towards zero). Then the lower bound (disregarding a determinant) tends towards:
$$
\begin{align*}
\int \exp(g(\vec u)) d\vec u &\approx
\vec y^\top(X\vec\beta + Z\vec\mu)
- \vec 1^\top b(X\vec\beta + Z\vec\mu) \\
&\hspace{25pt}+ \vec 1^\top c(\vec y) + \frac 12 \Big(
\log\lvert\Sigma^{-1}\rvert
-\vec\mu^\top\Sigma^{-1}\vec\mu + K\Big) \\
&= g(\vec\mu) + \dots
\end{align*}
$$
where the dots do not depend on the model parameters, $\vec\beta$ and $\Sigma$. Thus, maximizing over $\vec\mu$ yields $\vec\mu\rightarrow \widehat u$. Then the only difference between the Laplace approximation and the GVA is a
$$
- \frac 12\log\lvert -g''(\widehat u)\rvert
$$
term. We have that
$$
-g''(\widehat u) = \Sigma^{-1} + Z^\top b''(X\vec\beta + Z\widehat u)Z
$$
where the derivatives are with respect to $\vec\eta = X\vec\beta + Z\vec u$. This does not tend towards zero as the conditional distribution of the random effects becomes more peaked. However, still very hand wavy, it may cancel out with the
$$
\frac 12\log\lvert\Lambda\rvert = -\frac 12\log\lvert\Lambda^{-1}\rvert
$$
term we disregarded in the lower bound. The first order condition for $\Lambda$ is:
$$
\Lambda^{-1} = \Sigma^{-1} + Z^\top B^{(2)}(X\vec\beta + Z\vec\mu, \text{diag}(Z\Lambda Z^\top))Z
$$
where
$$
B^{(2)}(\mu,\sigma^2) = \int b''(\sigma x+ \mu)\phi(x) dx.
$$
Thus, if $\vec\mu \approx \widehat u$ and $\Lambda \approx 0$ then:
$$
\Lambda^{-1} \approx \Sigma^{-1} + Z^\top b''(X\vec\beta + Z\widehat u)Z
$$
and the Laplace approximation and the GVA yield the same approximation of the log marginal likelihood.
Notes
Do also see the annals paper Ryan Warnick mentions.
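As a sanity check of the Laplace formula above (a toy 1-D stand-in, not the GLMM derivation itself), one can compare the approximation against numerical integration; the particular $g(u)$ below, a logistic likelihood with a standard normal prior, is made up:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

# toy 1-D check of the Laplace formula: g(u) = y*u - m*log(1 + e^u) - u^2/2
y, m = 3.0, 5.0
def g(u):
    return y * u - m * np.log1p(np.exp(u)) - 0.5 * u ** 2

u_hat = minimize_scalar(lambda u: -g(u)).x      # the mode of g
p_hat = 1.0 / (1.0 + np.exp(-u_hat))
neg_g2 = m * p_hat * (1.0 - p_hat) + 1.0        # -g''(u_hat)

# K = 1 case of the formula above vs. the log of the exact integral
laplace = 0.5 * np.log(2 * np.pi) - 0.5 * np.log(neg_g2) + g(u_hat)
exact = np.log(quad(lambda u: np.exp(g(u)), -20.0, 20.0)[0])
```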
17,847 | Comparing Laplace Approximation and Variational Inference
There's a nice old Neural Computation paper on the relationship between the Laplace approximation and variational inference with a Gaussian proxy posterior:
http://www0.cs.ucl.ac.uk/staff/c.archambeau/publ/neco_mo09_web.pdf
In short, the variational approximation is equivalent to requiring the Laplace approximation to hold on average, where the average is taken under the proxy posterior, as opposed to just "locally." Thus, the mean of the proxy posterior under a Laplace approximation is the point (assuming there's only one) where the gradient of the true log-posterior is zero; whereas the mean of the proxy posterior under the variational Gaussian approximation is the point that renders the average of the gradient of the true log-posterior zero. Similarly for the covariance matrix.
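To see the distinction numerically, here is a small 1-D sketch (my own illustrative construction, not from the paper): for a skewed target, the Laplace mean zeroes the gradient of the true log-posterior at a single point, while the variational-Gaussian mean zeroes that gradient on average under the proxy. The skew-normal target, the quadrature order, and the optimizer settings below are all arbitrary choices.

```python
import numpy as np
from scipy import optimize, stats

ALPHA = 3.0  # skew parameter; illustrative choice, not from the paper

def log_p(x):
    # Unnormalized skew-normal log-density: a skewed target on which the
    # Laplace and variational-Gaussian approximations differ.
    return -0.5 * x**2 + stats.norm.logcdf(ALPHA * x)

def dlog_p(x, h=1e-5):
    # Central-difference gradient of the true log-posterior.
    return (log_p(x + h) - log_p(x - h)) / (2.0 * h)

# Laplace: mean at the mode, variance from the curvature at the mode.
mode = optimize.minimize_scalar(lambda x: -log_p(x)).x
h = 1e-4
curv = (log_p(mode + h) - 2.0 * log_p(mode) + log_p(mode - h)) / h**2
laplace_mean, laplace_var = mode, -1.0 / curv

# Variational Gaussian: maximize ELBO(m, log s) = E_q[log p] + log s + const,
# with the expectation under q = N(m, s^2) taken by Gauss-Hermite quadrature.
z, w = np.polynomial.hermite_e.hermegauss(40)
w = w / w.sum()  # normalized weights for averaging against N(0, 1)

def neg_elbo(params):
    m, log_s = params
    return -(np.sum(w * log_p(m + np.exp(log_s) * z)) + log_s)

opt = optimize.minimize(neg_elbo, x0=[laplace_mean, 0.5 * np.log(laplace_var)])
vi_mean, vi_std = opt.x[0], np.exp(opt.x[1])
```

On this target the two means differ noticeably, and the two stationarity conditions described above can be checked directly: the gradient of the log-posterior vanishes at the Laplace mean pointwise, while its quadrature average under the variational Gaussian vanishes at the VI mean.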
17,848 | Using LASSO on random forest | This sounds somewhat like gradient tree boosting. The idea of boosting is to find the best linear combination of a class of models. If we fit a tree to the data, we are trying to find the tree that best explains the outcome variable. If we instead use boosting, we are trying to find the best linear combination of trees.
However, using boosting we are a little more efficient as we don't have a collection of random trees, but we try to build new trees that work on the examples we cannot predict well yet.
For more on this, I'd suggest reading chapter 10 of Elements of Statistical Learning:
http://statweb.stanford.edu/~tibs/ElemStatLearn/
While this isn't a complete answer of your question, I hope it helps.
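As a toy illustration of that residual-fitting idea (my own sketch, not from the book; the stump learner, learning rate, and round count are all arbitrary), here is least-squares gradient boosting on one feature, where each round fits a one-split stump to what the ensemble cannot predict well yet:

```python
import numpy as np

def fit_stump(x, r):
    # Exhaustively pick the single split of 1-D feature x that best fits
    # the current residuals r in squared error.
    best = None
    for t in np.unique(x)[:-1]:
        lm, rm = r[x <= t].mean(), r[x > t].mean()
        sse = ((r - np.where(x <= t, lm, rm)) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, t, lm, rm)
    _, t, lm, rm = best
    return lambda q: np.where(q <= t, lm, rm)

def boost(x, y, n_rounds=100, lr=0.3):
    # Stagewise additive modeling: each new stump is fit to the residuals,
    # i.e. to the examples the current ensemble cannot predict well yet.
    pred = np.full(len(y), y.mean())
    for _ in range(n_rounds):
        stump = fit_stump(x, y - pred)
        pred = pred + lr * stump(x)
    return pred

# Demo: the boosted ensemble should drive the training error well below
# the variance of the raw target.
x = np.linspace(0.0, 1.0, 100)
y = np.sin(2 * np.pi * x)
pred = boost(x, y)
```

Unlike a random forest, which averages independently grown trees with equal weight, each stump here is chosen to reduce the remaining error, which is the sense in which boosting searches for a good linear combination of trees.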
17,849 | Exponential weighted moving skewness/kurtosis | The formulas are straightforward but they are not as simple as intimated in the question.
Let $Y$ be the previous EWMA and let $X = x_n$, which is presumed independent of $Y$. By definition, the new weighted average is $Z = \alpha X + (1 - \alpha)Y$ for a constant value $\alpha$. For notational convenience, set $\beta = 1-\alpha$. Let $F$ denote the CDF of a random variable and $\phi$ denote its moment generating function, so that
$$\phi_X(t) = \mathbb{E}_F[\exp(t X)] = \int_\mathbb{R}{\exp(t x) dF_X(x)}.$$
With Kendall and Stuart, let $\mu_k^{'}(Z)$ denote the non-central moment of order $k$ for the random variable $Z$; that is, $\mu_k^{'}(Z) = \mathbb{E}[Z^k]$. The skewness and kurtosis are expressible in terms of the $\mu_k^{'}$ for $k = 1,2,3,4$; for example, the skewness is defined as $\mu_3 / \mu_2^{3/2}$ where
$$\mu_3 = \mu_3^{'} - 3 \mu_2^{'}\mu_1^{'} + 2{\mu_1^{'}}^3 \text{ and }\mu_2 = \mu_2^{'} - {\mu_1^{'}}^2$$
are the third and second central moments, respectively.
By standard elementary results,
$$\eqalign{
&1 + \mu_1^{'}(Z) t + \frac{1}{2!} \mu_2^{'}(Z) t^2 + \frac{1}{3!} \mu_3^{'}(Z) t^3 + \frac{1}{4!} \mu_4^{'}(Z) t^4 +O(t^5) \cr
&= \phi_Z(t) \cr
&= \phi_{\alpha X}(t) \phi_{\beta Y}(t) \cr
&= \phi_X(\alpha t) \phi_Y(\beta t) \cr
&= (1 + \mu_1^{'}(X) \alpha t + \frac{1}{2!} \mu_2^{'}(X) \alpha^2 t^2 + \cdots)
(1 + \mu_1^{'}(Y) \beta t + \frac{1}{2!} \mu_2^{'}(Y) \beta^2 t^2 + \cdots).
}
$$
To obtain the desired non-central moments, multiply the latter power series through fourth order in $t$ and equate the result term-by-term with the terms in $\phi_Z(t)$.
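That matching can be done mechanically. The sketch below (my own illustration) multiplies the truncated series for $\phi_X(\alpha t)$ and $\phi_Y(\beta t)$ and reads off the non-central moments of $Z$; as a sanity check, for independent standard normal $X$ and $Y$ it reproduces the moments of $N(0, \alpha^2 + \beta^2)$.

```python
from math import factorial

def combine(mu_x, mu_y, a, b):
    """Non-central moments of Z = a*X + b*Y for independent X, Y, up to order 4.

    mu_x, mu_y are [1, mu_1', mu_2', mu_3', mu_4']; this implements the
    truncated product phi_Z(t) = phi_X(a t) * phi_Y(b t), matched term by term.
    """
    cx = [m * a**k / factorial(k) for k, m in enumerate(mu_x)]  # coeffs of phi_X(a t)
    cy = [m * b**k / factorial(k) for k, m in enumerate(mu_y)]  # coeffs of phi_Y(b t)
    # Cauchy product of the two series, then undo the 1/k! to recover moments.
    return [factorial(k) * sum(cx[j] * cy[k - j] for j in range(k + 1))
            for k in range(5)]

# Sanity check: X, Y ~ N(0, 1) gives Z ~ N(0, a^2 + b^2).
std_normal = [1, 0, 1, 0, 3]
mz = combine(std_normal, std_normal, 0.3, 0.7)
```

In the EWMA setting, `mu_y` would hold the running non-central moments of the previous average and `mu_x` those attributed to the new observation; the central moments, and hence skewness and kurtosis, then follow from the identities quoted above.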
Let $Y$ be the previous EWMA and let $X = x_n$, which is presumed independent of $Y$. By definition, the new | Exponential weighted moving skewness/kurtosis
The formulas are straightforward but they are not as simple as intimated in the question.
Let $Y$ be the previous EWMA and let $X = x_n$, which is presumed independent of $Y$. By definition, the new weighted average is $Z = \alpha X + (1 - \alpha)Y$ for a constant value $\alpha$. For notational convenience, set $\beta = 1-\alpha$. Let $F$ denote the CDF of a random variable and $\phi$ denote its moment generating function, so that
$$\phi_X(t) = \mathbb{E}_F[\exp(t X)] = \int_\mathbb{R}{\exp(t x) dF_X(x)}.$$
With Kendall and Stuart, let $\mu_k^{'}(Z)$ denote the non-central moment of order $k$ for the random variable $Z$; that is, $\mu_k^{'}(Z) = \mathbb{E}[Z^k]$. The skewness and kurtosis are expressible in terms of the $\mu_k^{'}$ for $k = 1,2,3,4$; for example, the skewness is defined as $\mu_3 / \mu_2^{3/2}$ where
$$\mu_3 = \mu_3^{'} - 3 \mu_2^{'}\mu_1^{'} + 2{\mu_1^{'}}^3 \text{ and }\mu_2 = \mu_2^{'} - {\mu_1^{'}}^2$$
are the third and second central moments, respectively.
By standard elementary results,
$$\eqalign{
&1 + \mu_1^{'}(Z) t + \frac{1}{2!} \mu_2^{'}(Z) t^2 + \frac{1}{3!} \mu_3^{'}(Z) t^3 + \frac{1}{4!} \mu_4^{'}(Z) t^4 +O(t^5) \cr
&= \phi_Z(t) \cr
&= \phi_{\alpha X}(t) \phi_{\beta Y}(t) \cr
&= \phi_X(\alpha t) \phi_Y(\beta t) \cr
&= (1 + \mu_1^{'}(X) \alpha t + \frac{1}{2!} \mu_2^{'}(X) \alpha^2 t^2 + \cdots)
(1 + \mu_1^{'}(Y) \beta t + \frac{1}{2!} \mu_2^{'}(Y) \beta^2 t^2 + \cdots).
}
$$
To obtain the desired non-central moments, multiply the latter power series through fourth order in $t$ and equate the result term-by-term with the terms in $\phi_Z(t)$. | Exponential weighted moving skewness/kurtosis
The formulas are straightforward but they are not as simple as intimated in the question.
Let $Y$ be the previous EWMA and let $X = x_n$, which is presumed independent of $Y$. By definition, the new |
17,850 | Exponential weighted moving skewness/kurtosis | I think that the following updating formula works for the third moment, although I'd be glad to have someone check it:
$M_{3,n} = (1-\alpha)M_{3,n-1} + \alpha \Big[ x_n(x_n-\mu_n)(x_n-2\mu_n) - x_n\mu_{n-1}(\mu_{n-1}-2\mu_n) - \dots$
$\dots - \mu_{n-1}(\mu_n-\mu_{n-1})^2 - 3(x_n-\mu_n) \sigma_{n-1}^2 \Big]$
Updating formula for the kurtosis still open...
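Neither answer settles the kurtosis update, so here is a pragmatic shortcut (an assumption of mine, not either answer's derivation): smooth the raw moments of the data directly, $m_{k,n} = \alpha x_n^k + (1-\alpha) m_{k,n-1}$, and convert to skewness and kurtosis with the usual raw-to-central identities.

```python
class EWMoments:
    """Exponentially weighted skewness/kurtosis via EW raw moments.

    Simpler than deriving closed-form central-moment recursions: the k-th
    raw moment of the stream is smoothed directly, and standard identities
    convert raw moments to central moments on demand.
    """

    def __init__(self, alpha, x0=0.0):
        self.a = alpha
        self.m = [x0 ** k for k in (1, 2, 3, 4)]  # EW raw moments m1..m4

    def update(self, x):
        a = self.a
        self.m = [a * x ** k + (1 - a) * mk
                  for k, mk in zip((1, 2, 3, 4), self.m)]

    def skewness(self):
        m1, m2, m3, _ = self.m
        mu2 = m2 - m1 ** 2
        mu3 = m3 - 3 * m2 * m1 + 2 * m1 ** 3
        return mu3 / mu2 ** 1.5

    def kurtosis(self):
        m1, m2, m3, m4 = self.m
        mu2 = m2 - m1 ** 2
        mu4 = m4 - 4 * m3 * m1 + 6 * m2 * m1 ** 2 - 3 * m1 ** 4
        return mu4 / mu2 ** 2

# Demo: a long periodic 0,0,0,1 stream has skewness 2/sqrt(3) ~ 1.1547
# and kurtosis 7/3, which the EW estimates should approach.
ew = EWMoments(alpha=0.01)
for i in range(20000):
    ew.update((0, 0, 0, 1)[i % 4])
```

Note this tracks the exponentially weighted moments of the data, whereas the first answer's derivation concerns the moments of the smoothed variable $Z$ itself; the two are different quantities.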
17,851 | How would econometricians answer the objections and recommendations raised by Chen and Pearl (2013)? | Absent a response from the 6 (sets of) authors themselves, a response
by the econometricians/statisticians on this forum might be the next
best thing. My hope is that people can respond to the issues raised by
Chen & Pearl (repeated by me below) and hopefully a consensus view
will emerge that can be of use to students using these textbooks.
I can offer my perspective on this excellent question. My analysis is not complete yet, but most of its conclusions are already outlined. I can defend what I say below, even though there is not enough time and space here to do so fully. Naturally I may be wrong; after all, I am not a professor, so forgive me if some points are mistaken. I am also here to read other opinions and to learn something about the topic.
I started facing the problem of causality in econometrics some years ago, and even before reading Chen and Pearl (2013) it seemed to me that there were problems.
I surveyed several econometrics textbooks, including all six considered in Chen and Pearl (2013) and several others. I also studied many articles, slides, and related material. My conclusion is that, too often, causal questions are not properly addressed in the econometric literature.
The story could be very long, but we can start by noting that it is hard to find two textbooks that share exactly the same assumptions and/or implications, and this affects causal questions above all. Obviously most concepts are shared, but some differences are relevant and cannot always be easily reconciled. Leaving aside specific comparisons, it is hard to find in econometrics books the exact set of assumptions that justifies a causal interpretation of regression. This is quite puzzling, because causal questions are, or should be, tremendously important in econometrics. Initially I thought that the problems I encountered, if any, boiled down to details. Over time the situation came to seem more serious.
When I encountered Chen and Pearl (2013), my perplexities were confirmed and deepened. At one time I considered it simply impossible that several masters of econometrics had made such serious mistakes. Today I am convinced that, surprisingly, this is the case. Most general econometrics textbooks should therefore be revised; in some cases, completely rewritten.
It seems to me that all the problems stem from conflating causal and statistical concepts, as they are used in most econometrics textbooks and in the literature in general.
Indeed, statistical concepts and assumptions must be clearly separated from causal ones. Every statistical assumption can be expressed as a restriction on some joint probability distribution, but a joint probability distribution cannot encode causal assumptions.
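A small simulation (with illustrative numbers of my own choosing) makes this concrete: two opposite causal structures can generate the same joint distribution, hence the same regression slope, while giving different answers to the interventional question "what happens to Y if we set X?":

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Model A: X causes Y.  X ~ N(0,1), Y = 0.5 X + N(0, 0.75).
xa = rng.normal(0.0, 1.0, n)
ya = 0.5 * xa + rng.normal(0.0, np.sqrt(0.75), n)

# Model B: Y causes X, calibrated to produce the SAME joint distribution
# (bivariate normal, unit variances, correlation 0.5).
yb = rng.normal(0.0, 1.0, n)
xb = 0.5 * yb + rng.normal(0.0, np.sqrt(0.75), n)

def slope(x, y):
    # OLS slope of y on x.
    return np.cov(x, y)[0, 1] / np.var(x)

# Intervention do(X = 1): replace X's own mechanism by the constant 1.
ya_do = 0.5 * 1.0 + rng.normal(0.0, np.sqrt(0.75), n)  # model A: Y responds
yb_do = rng.normal(0.0, 1.0, n)                        # model B: Y unaffected
```

Both datasets yield a slope of about 0.5 for Y on X, yet setting X to 1 moves the mean of Y to 0.5 in model A and leaves it at 0 in model B: no statistic of the joint distribution distinguishes the two structures.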
In my view, a clear position on this point, together with the remedies he proposes, is the most important contribution of Pearl's work. Several people have criticized it, but in my opinion most of those criticisms are ill-posed. See here for my perspective: Criticism of Pearl's theory of causality
Several of my questions and answers on this site revolve around "regression and causation", and most of them are summarized here:
Under which assumptions a regression can be interpreted causally?
Most of the conflation problems revolve around the controversial concepts of exogeneity, error terms, and the true model (more on this below).
Now, the problems are far from uniformly distributed across materials and, presumably, across people's understanding. To some extent, however, they are widely shared, and they reveal that, at a general level, both types of concepts, or at least the causal ones, are badly understood. As a consequence, in short, we can say that the current econometric theory of causality is, in general, flawed.
Note that the problems underscored in Chen and Pearl (2013) do not appear only there; they have emerged elsewhere too. One relevant article is:
Trygve Haavelmo and the Emergence of causal calculus - Pearl; Econometric Theory (2015)
Some econometricians replied to Pearl:
CAUSAL ANALYSIS AFTER HAAVELMO - Heckman and Pinto
and Pearl replied to them
Reflections on Heckman and Pinto's "Causal Analysis After Haavelmo" - Pearl
Moreover, two of the most eminent econometricians focused on the causal side of econometrics are Angrist and Pischke. Indeed, in my opinion (and not only mine), the best econometrics book about causality is theirs: Mostly Harmless Econometrics: An Empiricist's Companion - Angrist and Pischke (2009). They can be considered eminent authors of the "experimental school".
It is relevant that Angrist and Pischke, too, are critical of how econometrics, and its causal part in particular, is taught. See here: Undergraduate Econometrics Instruction: Through Our Classes, Darkly - Journal of Economic Perspectives, Volume 31, Number 2, Spring 2017, Pages 125-144.
The core of the problem, not by chance, revolves around error terms and, hence, exogeneity:
For the most part, legacy texts have a uniform structure: they begin by introducing a linear model for an economic outcome variable,
followed closely by stating that the error term is assumed to be
either mean-independent of, or uncorrelated with, regressors. The
purpose of this model—whether it is a causal relationship in the sense
of describing the consequences of regressor manipulation, a
statistical forecasting tool, or a parameterized conditional
expectation function—is usually unclear. (p. 138)
Some of the problems underscored in Chen and Pearl (2013) are avoided in Angrist and Pischke (2009), but no Pearl tools are used or suggested there. Indeed, Pearl is critical of them too, as we can see here:
https://p-hunermund.com/2017/02/22/judea-pearl-on-angrist-and-pischke/
… the debate about causality in econometrics seems far from closed.
Focusing on your specific points:
Ideally the response to Chen and Pearl would come from these 6 (sets
of) authors themselves. Did they respond? In the 7 years since the
article, did any of the 6 (sets of) authors make any changes to their
books which are in line with the recommendations by Chen & Pearl
(2013)?
I don't know whether any private replies have been given. However, the Chen and Pearl article is public, so the replies should be public too. To the best of my knowledge, no public reply exists yet. Obviously this does not help the defence of the econometrics textbooks. I too am waiting for a public reply from the authors.
What I can tell you is that new editions of several of the textbooks involved have been released. For example, Greene's 8th edition (2018) takes no account of the criticisms and suggestions of Chen and Pearl (2013), who analyzed Greene's 7th edition (2012); not a word is spent in the direction they suggest. For Wooldridge, the edition analyzed is the 4th (2009); others have since been released, and today we are at the 7th (note that publication dates can depend on translations and other details, but the edition number should always be consistent). As with Greene's book, the suggestions are not considered at all. For Stock and Watson, the edition analyzed is the 3rd, and the 4th was released recently. Interestingly, in Stock and Watson's case some material on machine learning topics was added and some things about causal concepts were modified. However, these additions and modifications seem to follow Angrist and Pischke's suggestions more than Pearl's; note that Pearl's name appears in the acknowledgments (in the other books Pearl is not cited at all). To the best of my knowledge, no econometrics textbook has yet taken Pearl's suggestions seriously.
That said, I am grateful to Chen and Pearl for their article; however, I do not much like the table used there. All the concepts in it can be mixed and, worse, all the related problems can easily be masked when they are considered separately, as any table suggests. My analysis is not complete yet, but I think we have to give the authors the chance to explain exhaustively what they mean, without forcing them to use Pearl's language and tools. Therefore I do not consider points like Q10 and Q11. If we treat Pearl's language and tools as mandatory, the analysis becomes quick and easy and leads us to conclude that no econometrics book can be saved. Indeed, this seems to be Pearl's own opinion; I heard it from him in a lecture published on YouTube.
I am open-minded about the econometrics authors' strategy, but they must demonstrate the consistency of their arguments. I think that any inconsistency in the theory an author presents will show up as incorrect, contradictory, or ambiguous statements throughout the book. For this scrutiny, semantics and examples matter a lot. Chen and Pearl (2013) report some very important points; I follow the same idea while giving the authors more room. Anyway, as already said, it seems to me that the arguments offered in most textbooks are inconsistent about causality. I share most of the points and conclusions of Chen and Pearl (2013); perhaps something more should be added.
Now some further comments:
Q2. Does the author present example problems that require prediction
alone? Wooldridge does not present any examples that require
prediction alone.
This question gives me the chance to show briefly what I mean by "my analysis". I consider here the 7th edition, the latest. What is certain is that the predictive and causal scopes of an econometric model are not clearly separated and are not well treated. Let me quote some parts of this book.
The distinction between causation and prediction seems to be recognized:
Even when economic theories are not most naturally described in terms of causality, they often have predictions that can be tested using
econometric methods. The following example demonstrates this
approach. (p. 14)
However the distinction is not clearly transposed in the assumptions and, then, in the econometric theory presented.
Indeed we have:
MLR.1: Introduce the population model and interpret the population parameters (which we hope to estimate). … MLR.4: Assume that, in the
population, the mean of the unobservable error does not depend on the
values of the explanatory variables; this is the “mean independence”
assumption combined with a zero population mean for the error, and it
is the key assumption that delivers unbiasedness of OLS. (from the Preface)
Note that both the unobservable and the population concepts are used in MLR.4.
Moreover, it is said that MLR.1 is about the "linear in parameters" population model (the true model), p. 80.
Moreover, it is added:
When Assumption MLR.4 holds, we often say that we have exogenous explanatory variables. (p. 82)
Later, an important section, "Several Scenarios for Applying Multiple Regression", is introduced. There (pp. 98-101) it is said that
MLR.4 is true by construction once we assume linearity. (p. 98)
This is strange, because it means that MLR.1 implies MLR.4: a redundant assumption. Worse, the fact that MLR.4 holds by construction under linearity excludes any possibility of treating the error term, and hence the true model in general, as something structural/causal. That said, at this point MLR.4/MLR.1 are used to justify that a linear regression estimated with OLS is good for prediction.
Moreover
Multiple regression can be used to test the efficient markets
hypothesis because MLR.4 holds by construction once we assume a linear
model
The redundant argument continues, and MLR.4 is used here to demonstrate that linear regression estimated with OLS is good for testing economic theory, a causal concept. Honestly, one could argue here that, even though it comes from economic theory, the efficient markets hypothesis can be considered a predictive rather than a causal concept. But in that case we can ask: why is this example presented in a scenario different from prediction?
Later, two other "scenarios" for regression are presented: Testing for Ceteris Paribus Group Differences, and Potential Outcomes, Treatment Effects, and Policy Analysis.
So all of these seem to be presented as notably different scopes (scenarios). Moreover, the argument
… and so MLR.4 holds by construction. OLS can be used to obtain an
unbiased estimator of $\beta$ (and the other coefficients).
is used for the former case. For the latter (pp. 100-101), an ad hoc conditional independence assumption is introduced; there is no mention of MLR.1 to MLR.4.
Later, in the section "Revisiting Causal Effects and Policy" (p. 151), it is said:
In Section 3-7e [pp. 100-101] we showed how multiple regression can be used to obtain unbiased estimators of causal, or treatment, effects
in the context of policy interventions … We know that the OLS
estimator of $\tau$ [treatment effect] is unbiased because MLR.1 and
MLR.4 hold (and we have random sampling from the population [= MLR.2])
So now, even though they were not invoked before, assumptions MLR.1 to MLR.4 are enough for causation... even though they are purely statistical assumptions... and even though they are the very same ones used for prediction... even though prediction and causation are different goals.
Moreover, the structural equation concept is introduced much later, at p. 505, in the context of the IV and 2SLS estimators (Chapter 15):
We call this a structural equation to emphasize that we are interested in the $\beta_j$, which simply means that the equation is
supposed to measure a causal relationship
Concepts like reduced form and identification are also introduced there. These concepts are used again in Chapter 16, which is about simultaneous equation models.
This strategy gives the impression that structural concepts, and related ones, are a special subject, appreciably different from what was presented before. But if so, why is causality used and justified earlier as well? Why are structural and related concepts needed only now?
Moreover, in the introduction to Chapter 16 it is said that all the remedies presented in the book deal with the endogeneity problem, which is used in that same chapter as a causal concept. Among other things, this gives the impression that endogeneity is the core problem for every scenario/scope, prediction included. Worse, it is affirmed earlier that under linearity MLR.4 holds, so exogeneity holds, so endogeneity goes away, so causal conclusions are permitted by construction.
This whole story, and this book in general, seems to me deeply problematic. Books like this lead readers into insurmountable confusion.
Today I am convinced that the poor treatment of causality goes together with a not-so-good treatment of prediction. To some extent this seems true of most general econometrics books. For this reason too, most of them should be revised, if not completely rewritten. Read also here: What is the relationship between minimizing prediction error versus parameter estimation error?
These points are usually not well recognized in most general econometrics textbooks.
Now, you ask
Stock and Watson motivate their simple regression model with the
following statement. "If she reduces the average class size by two
students, what will the effect be on standardized test scores in her
district?" There also is no "ceteris paribus" statement, which in my
mind suggests that the model is intended to be merely predictive
rather than causal. … My commentary: It seems to me that Stock &
Watson introduced both simple linear regression and multiple linear
regression in a predictive context, which is why they did not mention
ceteris paribus. Is that accurate?
No. Stock and Watson spend most of their time speaking about causality, and they recognize tolerably well the distinction between forecasting and causality (read Section 9.3). Even Chen and Pearl (2013) underscore this fact. Luckily, Stock and Watson's book was my own first encounter with econometrics.
The effect of class size on standardized test scores is used precisely as a clear causal question. Staying with the 3rd edition, the assumptions presented in Chapters 4 to 9, the core of the book, must be understood as justifications for a causal interpretation of regression. They follow the "as if experimental" paradigm. Less time is spent on pure prediction, mainly in Chapter 14. Unfortunately, Stock and Watson conflate causal and statistical concepts, and causal conclusions appear to be founded on statistical assumptions.
Q8. Does the author assume that exogeneity of X is inherent to the
model?
As said before, most problems revolve around the exogeneity assumption. All six books in question use exogeneity, in some form, as a crucial assumption. However, none of them uses the exogeneity concept properly. This remains true of most econometrics books. They use exogeneity ambiguously, as a concept that flips between the statistical and the causal side and/or mixes the two.
I started my analysis precisely from the problems with the exogeneity assumption. Today I no longer have any doubt about it: Pearl is right about exogeneity; it is a causal concept. Only if we accept this perspective, and work consistently with it, do all the ambiguities and contradictions resolve.
Read also here:
Zero conditional expectation of error in OLS regression
Does homoscedasticity imply that the regressor variables and the errors are uncorrelated?
Multiple Linear Regression Zero Conditional Mean Assumption
Another great and related problem revolves around the concept of the true model. In most econometrics books it is erroneously conflated with something like the population regression or, worse, with some ambiguous statistical object. If we take the so-called true model to be a structural linear causal model, all the problems resolve. Read also here:
Regression and the CEF
What is a 'true' model?
linear causal model
by the econometricians/statisticians on this forum might be the next
best thing. My hope is that people can respond to the issues | How would econometricians answer the objections and recommendations raised by Chen and Pearl (2013)?
Absent a response from the 6 (sets of) authors themselves, a response
by the econometricians/statisticians on this forum might be the next
best thing. My hope is that people can respond to the issues raised by
Chen & Pearl (repeated by me below) and hopefully a consensus view
will emerge that can be of use to students using these textbooks.
I can propose you my perspective about this great point. My analysis is not complete yet but most conclusions are outlined. I can defend what I will say, even if I don't have time and space enough here. Naturally I can go wrong, after all I'm not a Professor. If my points are wrong forgive me. I stay here for read more opinion too, and learn something about that.
I started to faced the problem of causality in econometrics some years ago and, also before to read Chen and Pearl (2013), it seemed me that some problems appeared.
I surveyed several econometrics manuals, all six considered in Chen and Pearl (2013) and several others. Moreover I studied many articles, slide and related material in general. My conclusion is that, too often, causal questions was not properly addressed in econometric literature.
The story can be very long but we can start to noting that: is hard to find two manuals that share exactly the same assumptions and/or implications, this factor affects primarily causal questions. Obviously most concepts are shared but some differences are relevant and not always can be easily solved. Anyway, keeping aside specific points/comparisons, is hard to find among econometrics books the exact set of assumptions that justifies causal interpretation for regression. This is quite puzzling because causal questions are, or should be, tremendously important in econometrics. Initially I thought that the problems I encountered, if any, boiled down in some details. Over time the situation seemed more serious to me.
When I encountered Chen and Pearl (2013) my perplexities was confirmed and increased and go deeper. Time ago I considered simply not possible that several econometric Masters made so serious mistakes. Today I’m convinced that, surprisingly, things is so. Therefore most generalistic econometric books should be revised; in some case completely rewritten.
It seems me that all problems come from conflations between causal and statistical concepts, as used in most econometric manuals and literature in general.
Indeed, statistical concepts and assumptions must be clearly separated from causal one. All statistical assumptions can be considered as restrictions on some joint probability distributions, but joint probability distributions cannot encode causal assumptions.
In my view clear position about that and related proposed remedies represent the most important contribution of Pearl’s literature. Several people critic his works, but in my opinion most issues are bad posed. Read here for my perspective about that: Criticism of Pearl's theory of causality
Several of my questions and answers on this site swing around “regression and causation” and most things are summarized here:
Under which assumptions a regression can be interpreted causally?
Most conflation problems swing around the controversial concepts of exogeneity, error terms and true model (see more below).
Now, the problems are far from uniformly distributed among materials and, presumably, among peoples understandings. However, in some extent, the problems are widely shared and reveal that, at general level, both type of concepts, or at least the causal ones, are badly understood. As consequence, in short, we can says that, in general, the “current Econometric Theory about causality” is flawed.
Note that the problems underscored in Chen and Pearl (2013) do not stay only there but emerged also elsewhere. One relevant article about it is:
Trygve Haavelmo and the Emergence of causal calculus - Pearl; Econometric Theory (2015)
Some econometricians replied to Pearl:
CAUSAL ANALYSIS AFTER HAAVELMO - Heckman and Pinto
and Pearl replied to them
Reflections on Heckman and Pinto's “Causal Analysis After Haavelmo" – Pearl
Moreover two of most eminent econometricians focused of causal part of econometrics are Angrist and Pischke. Indeed in my opinion, and not only, the best econometrics book about causality is their: Mostly Harmless Econometrics: An Empiricist's Companion - Angrist and Pischke (2009). Them can be considered as eminent Authors of “experimental school”.
Relevant to see that Angrist and Pischke too are critics about how Econometrics, and his causal part in particular, is teached. See here: Undergraduate Econometrics Instruction: Through Our Classes, Darkly - Journal of Economic Perspectives—Volume 31, Number 2—Spring 2017—Pages 125–144
The core of problems, not by chance, swing around error terms and, then, exogeneity:
For the most part, legacy texts have a uniform structure: they begin by introducing a linear model for an economic outcome variable,
followed closely by stating that the error term is assumed to be
either mean-independent of, or uncorrelated with, regressors. The
purpose of this model—whether it is a causal relationship in the sense
of describing the consequences of regressor manipulation, a
statistical forecasting tool, or a parameterized conditional
expectation function—is usually unclear. Pag 138
Some problems underscored in Chen and Pearl (2013) are avoided in Angrist and Pischke (2009), but no Pearl's tools are used or suggested. Indeed Pearl is critic about them too as we can see here:
https://p-hunermund.com/2017/02/22/judea-pearl-on-angrist-and-pischke/
… the debate about causality in econometrics seems far from close.
Focusing on your specific points:
Ideally the response to Chen and Pearl would come from these 6 (sets
of) authors themselves. Did they respond? In the 7 years since the
article, did any of the 6 (sets of) authors make any changes to their
books which are in line with the recommendations by Chen & Pearl
(2013)?
I don't know whether any private replies have been given. However, the Chen and Pearl article is public, so the replies should be public as well. To the best of my knowledge, no public reply exists yet. Obviously this does not help the defence of the econometrics manuals. I too am waiting for a public reply from the authors.
What I can tell you is that new editions have been released for several of the manuals involved. For example, in Greene's 8th edition (2018), the criticisms and suggestions of Chen and Pearl (2013), who analyzed Greene's 7th edition (2012), are completely ignored; no words are spent in the direction they suggest. For Wooldridge, the edition analyzed is the 4th (2009); others have been released since, and today we are at the 7th edition (note that the publication date can depend on the translation and other details, but the edition number should always be consistent). As with Greene's manual, the suggestions are not considered at all. For Stock and Watson, the edition analyzed is the 3rd, and the 4th has been released recently. Interestingly, in the SW case some detail about machine-learning topics was added, and some things about causal concepts were modified. However, these additions/modifications seem to follow Angrist and Pischke's suggestions more than Pearl's; note that Pearl's name appears in the acknowledgments (in the other books Pearl is not cited at all). To the best of my knowledge, no econometrics book has yet taken Pearl's suggestions seriously.
That said, I am grateful to Chen and Pearl for their article; however, I do not much appreciate the table used there. All the concepts in it can be mixed and, worse, all the related problems can easily be masked if considered separately, as any table suggests. My analysis is not complete yet, but I think we have to give the authors the opportunity to explain exhaustively what they mean, without forcing them to use Pearl's language and tools. Therefore I do not consider points like Q10 and Q11. If we take Pearl's language and tools as mandatory, the analysis is easy and fast and leads us to conclude that not one econometrics book can be saved. Indeed this seems to be Pearl's own opinion; I heard it from him in a lesson published on YouTube.
I keep an open mind about the econometrics authors' strategy, but they must demonstrate the consistency of their arguments. I think any inconsistency in the theory presented by the authors will be revealed by incorrect, contradictory, or ambiguous statements throughout the books. For this scrutiny, semantics and examples matter a lot. In Chen and Pearl (2013) some very important points are reported; I follow the same idea, but giving the authors more latitude. Anyway, as already said, the arguments offered in most manuals seem to me inconsistent about causality. I share most of the points and conclusions of Chen and Pearl (2013); maybe something more should be added.
Now, some more comments:
Q2. Does the author present example problems that require prediction
alone? Wooldridge does not present any examples that require
prediction alone.
This question gives me the chance to show briefly what I mean by "my analysis". I consider here the 7th edition, the latest. What is certain is that the predictive and the causal scope of an econometric model are not clearly separated and well treated. Let me quote some parts of this book.
The distinction between causation and prediction seems to be recognized:
Even when economic theories are not most naturally described in terms of causality, they often have predictions that can be tested using
econometric methods. The following example demonstrates this
approach. (p. 14)
However, the distinction is not clearly carried over into the assumptions and, hence, into the econometric theory presented.
Indeed we have:
MLR.1: Introduce the population model and interpret the population parameters (which we hope to estimate). … MLR.4: Assume that, in the
population, the mean of the unobservable error does not depend on the
values of the explanatory variables; this is the “mean independence”
assumption combined with a zero population mean for the error, and it
is the key assumption that delivers unbiasedness of OLS. From Preface
Note that both the unobservable-error concept and the population concept are used in MLR.4.
Moreover, it is said that MLR.1 concerns the "linear parameter population model" (the true model) (p. 80).
Moreover, it is added:
When Assumption MLR.4 holds, we often say that we have exogenous explanatory variables. (p. 82)
Later, an important section, "Several Scenarios for Applying Multiple Regression", is introduced. There (pp. 98-101) it is said that
MLR.4 is true by construction once we assume linearity. (p. 98)
This fact is strange, because it means that MLR.1 implies MLR.4: a redundant assumption. Worse, if MLR.4 holds by construction under linearity, then there is no possibility of treating the error term, and hence the true model in general, as something structural/causal. That said, at this point MLR.4/MLR.1 are used to justify that linear regression estimated with OLS is good for prediction.
Moreover
Multiple regression can be used to test the efficient markets
hypothesis because MLR.4 holds by construction once we assume a linear
model
The redundant argument continues: MLR.4 is used here to show that linear regression estimated with OLS is good for testing economic theory, a causal concept. Honestly, one could argue that, even though it comes from economic theory, the efficient markets hypothesis can be considered a predictive rather than a causal concept. But in that case we can ask: why is this example presented in a scenario different from prediction?
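The "holds by construction" point is easy to see in a quick simulation: the OLS projection residual is uncorrelated with the regressor mechanically, even when the structural error is not. A hedged Python sketch (the model and numbers are my own illustration, not taken from Wooldridge):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# Structural model with a confounder C that the regression omits:
C = rng.normal(size=n)
x = 0.7 * C + rng.normal(size=n)
y = 1.0 * x + 1.0 * C + rng.normal(size=n)   # structural error = C + noise

# OLS of y on x: the projection residual is uncorrelated with x by construction...
X = np.column_stack([np.ones(n), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta
print(np.corrcoef(x, resid)[0, 1])   # ~0, mechanically

# ...yet the slope is biased for the structural coefficient (true value 1.0):
print(beta[1])                       # noticeably above 1.0: omitted-variable bias
```

So a zero-mean-error condition that holds "by construction" for the linear projection says nothing about whether the structural (causal) exogeneity condition holds.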
Later, two other "scenarios" for regression are presented: Testing for Ceteris Paribus Group Differences, and Potential Outcomes, Treatment Effects, and Policy Analysis.
So all of these are presented as notably different scopes (scenarios). Moreover, the argument
… and so MLR.4 holds by construction. OLS can be used to obtain an
unbiased estimator of $\beta$ (and the other coefficients).
is used for the former case. For the latter (pp. 100-101) an ad hoc conditional independence assumption is introduced; there is no mention of MLR.1 to MLR.4.
Later, in the section "Revisiting Causal Effects and Policy" (p. 151), it is said:
In Section 3-7e [pag 100/101] we showed how multiple regression can be used to obtain unbiased estimators of causal, or treatment, effects
in the context of policy interventions … We know that the OLS
estimator of $\tau$ [treatment effect] is unbiased because MLR.1 and
MLR.4 hold (and we have random sampling from the population [=MLR.2])
So, even though it was not considered before, now the assumptions MLR.1 to MLR.4 are enough for causation; even though they are purely statistical assumptions; even though they are the very same ones used for prediction; and even though prediction and causation are different goals.
Moreover, the structural-equation concept is introduced much later, on p. 505, in the context of the IV and 2SLS estimators (Chapter 15):
We call this a structural equation to emphasize that we are interested in the $\beta_j$, which simply means that the equation is
supposed to measure a causal relationship
Concepts like reduced form and identification are also introduced there. These concepts are used again in Chapter 16, which is about simultaneous equation models.
This strategy gives the impression that structural concepts, and related ones, are a special subject, notably different from what was presented before. But if so, why is causality used and justified earlier as well? Why are structural and related concepts needed only now?
Moreover, in the introduction of Chapter 16 it is said that all the remedies presented in the book deal with the endogeneity problem, a notion used in that same chapter as a causal concept. Among other things, this gives the impression that endogeneity is the core problem for every scenario/scope, prediction included. Worse, it was affirmed earlier that under linearity MLR.4 holds, hence exogeneity holds, hence endogeneity goes away, hence causal conclusions are permitted by construction.
This whole story, and the book in general, seems to me deeply problematic. Books like this lead readers into insurmountable confusion.
Today I am convinced that the poor treatment of causality goes together with a not-so-good treatment of prediction. To some extent this seems true of most general econometrics books. For this reason too, most of them should be revised, if not completely rewritten. Read also here: What is the relationship between minimizing prediction error versus parameter estimation error?
These points are usually not well recognized in most general econometrics manuals.
Now, you ask
Stock and Watson motivate their simple regression model with the
following statement. "If she reduces the average class size by two
students, what will the effect be on standardized test scores in her
district?" There also is no "ceteris paribus" statement, which in my
mind suggests that the model is intended to be merely predictive
rather than causal. … My commentary: It seems to me that Stock &
Watson introduced both simple linear regression and multiple linear
regression in a predictive context, which is why they did not mention
ceteris paribus. Is that accurate?
No. Stock and Watson spend most of their time on causality, and they recognize tolerably well the distinction between forecasting and causality (read Section 9.3). Even Chen and Pearl (2013) underscore this fact. Luckily, I first encountered econometrics through the SW manual.
The effect of class size on standardized test scores is used precisely as a clear causal question. Staying with the 3rd edition, the assumptions presented in Chapters 4 to 9, the core of the book, must be understood as justifications for the causal interpretation of regression. They follow the "as if experimental" paradigm. Less time is spent on pure prediction, mainly in Chapter 14. Unfortunately, SW conflate causal and statistical concepts, and it appears that their causal conclusions are founded on statistical assumptions.
Q8. Does the author assume that exogeneity of X is inherent to the
model?
As said before, most of the problems revolve around the exogeneity assumption. All six books under discussion use exogeneity, in one form or another, as a crucial assumption. However, none of them uses the exogeneity concept properly. This remains true for most econometrics books: they use exogeneity ambiguously, as a concept that flips between the statistical and the causal side and/or mixes the two.
I started my analysis precisely from the problems with the exogeneity assumption, and today I have no doubts left about it. Pearl is right about exogeneity: it is a causal concept. Only if we accept this perspective, and work consistently with it, do all the ambiguities and contradictions get resolved.
Read also here:
Zero conditional expectation of error in OLS regression
Does homoscedasticity imply that the regressor variables and the errors are uncorrelated?
Multiple Linear Regression Zero Conditional Mean Assumption
Another major and related problem revolves around the concept of the true model. In most econometrics books it is erroneously conflated with something like the population regression or, worse, some ambiguous statistical object. If we take the so-called true model to be a structural linear causal model, all the problems get resolved. Read also here:
Regression and the CEF
What is a 'true' model?
linear causal model
17,852 | How would econometricians answer the objections and recommendations raised by Chen and Pearl (2013)? | Different subjects indeed treat causality differently. Thoroughly adopting statistical causality would greatly affect the coherence of an econometrics textbook. In practice, economists investigate causality empirically, including how to specify a regression equation, both in structural form and in reduced form (i.e., statistical causality). Hansen from UWM did a comprehensive job covering both methodologies, and Joshua Angrist wrote the famous MHE (Mostly Harmless Econometrics) for the reduced-form approach.
17,853 | Variable selection vs Model selection | Sometimes modelers separate variable selection into a distinct step in model development. For instance, they would first perform exploratory analysis, research the academic literature and industry practices, then come up with a list of candidate variables. They'd call this step variable selection.
Next, they'd run a bunch of different specifications with many different variable combinations such as OLS model:
$$y_i=\sum_{j_m} X_{ij_m}\beta_{j_m}+\varepsilon_i,$$
where $j_m$ denotes variable $j$ in a model $m$. They'd pick the best model out of all models $m$ manually or in an automated routine. So, these people would call the latter stage model selection.
This is similar to how in machine learning people talk about feature engineering, when they come up with variables. You plug the features into LASSO or similar frameworks where you build a model using these features (variables). In this context it makes sense to separate out the variable selection into a distinct step, because you let the algorithm pick the right coefficients for the variables rather than eliminating any variables yourself. Your judgment (in regard to which variable goes into a model) is isolated in the variable selection step; the rest is up to the fitting algorithm.
In the context of the paper you cited, this is all irrelevant. The paper uses BIC or AIC to select between different model specifications. It doesn't matter whether you had the variable selection as a separate step in this case. All that matters is which variables are in any particular model specification $m$; then you look at their BIC/AIC to pick the best. They account for sample sizes and number of variables.
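To make the model-selection step concrete, here is a small Python sketch (not from the cited paper; the simulated data and the Gaussian-OLS AIC formula are illustrative) that scores every variable subset by AIC and picks the best specification $m$:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 4))                                   # candidate variables
y = 1.0 + 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(size=n)  # only the first two matter

def ols_aic(y, X):
    """AIC of a Gaussian OLS fit with an intercept."""
    n = len(y)
    Z = np.column_stack([np.ones(n), X])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    rss = np.sum((y - Z @ beta) ** 2)
    k = Z.shape[1] + 1                   # coefficients + error variance
    return n * np.log(rss / n) + 2 * k

subsets = [s for r in range(1, 5) for s in combinations(range(4), r)]
best = min(subsets, key=lambda s: ols_aic(y, X[:, list(s)]))
print(best)   # should contain 0 and 1; AIC may occasionally keep a spurious extra
```

The all-subsets loop is feasible only for a handful of candidates; with many variables, stepwise searches or penalized fits are the usual workaround.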
17,854 | Alternate distance metrics for two time series | Answering question 1:
Your criticism of DTW is addressed by introducing global constraints on the warping path. This both reduces computational effort (warping paths that are not allowed do not have to be computed) and prevents pathological warping.
Therefore the answer is: DTW with global constraints.
There are several variants of such constraints, such as the Sakoe-Chiba band and the Itakura parallelogram, as you can see in the following image. The image originates from a presentation by Chotirat Ratanamahatana and Eamonn Keogh, which is available online.
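To illustrate, here is a minimal Python sketch of DTW with a Sakoe-Chiba band (the window width and the test series are my own choices, not from the cited presentation):

```python
import numpy as np

def dtw_distance(a, b, band=None):
    """DTW distance between 1-D series a and b, with an optional
    Sakoe-Chiba band: warping is limited to |i - j| <= band."""
    n, m = len(a), len(b)
    if band is None:
        band = max(n, m)                 # no constraint
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(max(1, i - band), min(m, i + band) + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

t = np.linspace(0, 2 * np.pi, 50)
x, y = np.sin(t), np.sin(t + 0.3)        # same shape, slightly shifted
print(dtw_distance(x, y, band=5))        # small, despite the shift
```

Note that the band also cuts the cost from O(nm) to roughly O(n x band), which is the computational benefit mentioned above.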
Another possibly relevant time series distance measure is:
LCSS (Longest Common Subsequence) was originally developed to analyse string similarity, but it can also be used for numerical time series.
17,855 | Alternate distance metrics for two time series | To most users, this "outlier" is a remarkable difference and should yield a measurable difference.
But compared to an entirely different series, it should still only contribute a little, unless you did not preprocess your data well.
We cannot give you better recommendations, because it is impossible to tell what you want. We don't have your data, and we don't know your problem. In order to figure out how to solve this, you need to formalize your requirements: what should be similar, what should be different, and what should be more similar than what. Just complaining that you did not "like" the results of the measures is not enough; you need to be much more precise.
17,856 | Ratios in Regression, aka Questions on Kronmal | You should really have linked to the Kronmal paper (and explained your notation, which is taken directly from the paper). Your reading of the paper is too literal. Specifically, he does not give advice about weighting, rather saying that weighting can be done in the usual ways, so there is no need to discuss it. It is only mentioned as a possibility. Read your cases more like examples, especially as examples of how to analyze such situations.
In section 6 he does give some general advice, which I will cite here:
The message of this paper is that ratio variables should only be used
in the context of a full linear model in which the variables that make
up the ratio are included and the intercept term is also present. The
common practice of using ratios for either the dependent or the
independent variable in regression analysis can lead to misleading
inferences, and rarely results in any gain. This practice is
widespread and entrenched, however, and it may be difficult to
convince some researchers that they should give up their most prized
ratio or index.
The paper uses the (fictitious) example by Neyman on births and storks. To play with that example, you can access it from R by
data(stork, package="TeachingDemos")
I will leave the fun for the readers, but one interesting plot is this coplot:
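The trap behind the stork example, spurious correlation between ratios that share a denominator, can also be reproduced with a quick simulation (a hedged sketch; the uniform ranges are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
x = rng.uniform(1, 2, n)   # numerator 1
y = rng.uniform(1, 2, n)   # numerator 2, independent of x
z = rng.uniform(1, 2, n)   # common denominator, independent of both

print(np.corrcoef(x, y)[0, 1])          # ~0: the raw variables are unrelated
print(np.corrcoef(x / z, y / z)[0, 1])  # clearly positive, induced purely by z
```

This is exactly why Kronmal insists on including the ratio's component variables (and an intercept) in the model rather than regressing one ratio on another.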
17,857 | Mixed model vs. Pooling Standard Errors for Multi-site Studies - Why is a Mixed Model So Much More Efficient? | I know this is an old question, but it's relatively popular and has a simple answer, so hopefully it'll be helpful to others in the future. For a more in-depth take, take a look at Christoph Lippert's course on Linear Mixed Models which examines them in the context of genome-wide association studies here. In particular see Lecture 5.
The reason that the mixed model works so much better is that it's designed to take into account exactly what you're trying to control for: population structure. The "populations" in your study are the different sites using, for example, slightly different but consistent implementations of the same protocol. Also, if the subjects of your study are people, people pooled from different sites are less likely to be related than people from the same site, so blood-relatedness may play a role as well.
As opposed to the standard maximum-likelihood linear model, where we have $\mathcal{N}(Y|X\beta,\sigma^2 I)$, linear mixed models add an additional matrix called the kernel matrix $K$, which estimates the similarity between individuals, and fit the "random effects" so that similar individuals will have similar random effects. This gives rise to the model $\mathcal{N}(Y|X\beta + Zu,\sigma^2I + \sigma_g^2K)$.
Because you are trying to control for population structure explicitly, it's therefore no surprise that the linear mixed model outperformed other regression techniques.
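A bare-bones numpy sketch of the idea (the variance components are taken as known here, purely for illustration; real mixed-model software estimates them, e.g. by REML), using a block kernel where subjects from the same site have similarity 1:

```python
import numpy as np

rng = np.random.default_rng(2)
n_sites, per_site = 8, 25
n = n_sites * per_site
site = np.repeat(np.arange(n_sites), per_site)

# Kernel K: 1 where two subjects come from the same site, else 0
K = (site[:, None] == site[None, :]).astype(float)

x = rng.normal(size=n)
y = 2.0 * x + rng.normal(scale=1.5, size=n_sites)[site] + rng.normal(size=n)

# GLS with the mixed-model covariance sigma^2 I + sigma_g^2 K
Sigma = 1.0 * np.eye(n) + 1.5 ** 2 * K
Si = np.linalg.inv(Sigma)
X = np.column_stack([np.ones(n), x])
beta = np.linalg.solve(X.T @ Si @ X, X.T @ Si @ y)
print(beta[1])   # should be close to the true slope 2.0
```

With $K$ block-diagonal in sites, this covariance is exactly the random-intercept model; a genetic-relatedness $K$ generalizes the same machinery to related individuals.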
17,858 | How do test whether two multivariate distributions are sampled from the same underlying population? | http://131.95.113.139/courses/multivariate/mantel.pdf
It discusses two possible ways of doing just that if your datasets are the same size.
The basic approach is to compute a distance metric between your two observed matrices. Then, to determine whether that distance is significant, you use a permutation test.
If your datasets are not the same size, then you can use the cross-match test, although it does not appear to be very popular. Instead of the cross-match test, you can try up- or down-sampling your data so the sets are the same size, then use one of the approaches mentioned in the first paper.
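As a concrete sketch of the distance-plus-permutation idea (using the energy-distance statistic, which is my substitution and not the Mantel statistic from the linked notes):

```python
import numpy as np

def energy_stat(x, y):
    """Energy-distance statistic between samples x (n x d) and y (m x d)."""
    def md(a, b):
        return np.mean(np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1))
    return 2 * md(x, y) - md(x, x) - md(y, y)

def perm_pvalue(x, y, n_perm=300, seed=0):
    rng = np.random.default_rng(seed)
    obs = energy_stat(x, y)
    pooled, n = np.vstack([x, y]), len(x)
    hits = sum(
        energy_stat(pooled[p[:n]], pooled[p[n:]]) >= obs
        for p in (rng.permutation(len(pooled)) for _ in range(n_perm))
    )
    return (hits + 1) / (n_perm + 1)

rng = np.random.default_rng(1)
same = perm_pvalue(rng.normal(size=(40, 3)), rng.normal(size=(40, 3)))
diff = perm_pvalue(rng.normal(size=(40, 3)), rng.normal(loc=1.0, size=(40, 3)))
print(same, diff)   # the shifted pair should give a very small p-value
```

Any distance between the pooled samples could be plugged in; the permutation step is what turns it into a test.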
17,859 | How do test whether two multivariate distributions are sampled from the same underlying population? | Look up Hotelling's $T^2$, or if you have really high-dimensional data, look at this.
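For reference, a small Python implementation of the two-sample Hotelling $T^2$ test (assuming equal covariance matrices; the F conversion is the standard one, and the example data are made up):

```python
import numpy as np
from scipy import stats

def hotelling_two_sample(x, y):
    """Two-sample Hotelling T^2 test (equal covariances assumed).
    Returns (T^2, p-value) via the exact F distribution."""
    n, m = len(x), len(y)
    p = x.shape[1]
    d = x.mean(axis=0) - y.mean(axis=0)
    S = ((n - 1) * np.cov(x, rowvar=False)
         + (m - 1) * np.cov(y, rowvar=False)) / (n + m - 2)
    t2 = n * m / (n + m) * d @ np.linalg.solve(S, d)
    f = (n + m - p - 1) / (p * (n + m - 2)) * t2
    return t2, stats.f.sf(f, p, n + m - p - 1)

rng = np.random.default_rng(0)
x = rng.normal(size=(30, 3))
y = rng.normal(loc=[1.0, 0.0, 0.0], size=(30, 3))
t2, pval = hotelling_two_sample(x, y)
print(t2, pval)
```

Keep in mind that $T^2$ only compares mean vectors; it is not a test of the full distributions, which is why the permutation approaches in the other answer are more general.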
17,860 | What is the point of univariate regression before multivariate regression? | The causal context of your analysis is a key qualifier in your question. In forecasting, running univariate regressions before multiple regressions in the spirit of the "purposeful selection method" suggested by Hosmer and Lemeshow has one goal. In your case, where you are building a causal model, running univariate regressions before running a multiple regression has a completely different goal. Let me expand on the latter.
You and your instructor must have in mind a certain causal graph. Causal graphs have testable implications. Your mission is to start with the dataset that you have, and reason back to the causal model that might have generated it. The univariate regressions he suggested that you run most likely constitute the first step in the process of testing the implications of the causal graph you have in mind. Suppose that you believe that your data was generated by the causal model depicted in the graph below. Suppose you are interested in the causal effect of D on E. The graph below suggests a host of testable implications, such as:
E and D are likely dependent
E and A are likely dependent
E and C are likely dependent
E and B are likely dependent
E and N are likely independent
I mentioned that this is only the first step in the causal search process because the real fun starts once you start running multiple regressions, conditioning on different variables and testing whether the result of each regression is consistent with the implications of the graph. For example, the graph above suggests that E and A must be independent once you condition on D. In other words, if you regress E on D and A and find that the coefficient on A is not equal to zero, you'll conclude that E depends on A after you condition on D, and therefore that the causal graph must be wrong. It will even give you hints as to how to alter your causal graph, because the result of this regression suggests that there must be a path between A and E that is not d-separated by D. It will become important to know the testable dependence implications that chains, forks, and colliders have.
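The testing logic described here can be sketched with a simulation from an assumed toy graph (A -> D -> E, with N unrelated; this specific structure is my illustration, not necessarily the instructor's graph):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Toy linear structural model consistent with such a graph
A = rng.normal(size=n)
D = 0.8 * A + rng.normal(size=n)
E = 1.2 * D + rng.normal(size=n)
N = rng.normal(size=n)

def slopes(y, *regressors):
    """OLS slope coefficients (intercept dropped)."""
    X = np.column_stack([np.ones(n), *regressors])
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

print(slopes(E, A))     # nonzero: E and A are marginally dependent
print(slopes(E, N))     # ~0: E and N are independent
print(slopes(E, D, A))  # coefficient on A ~0: D d-separates A from E
```

The univariate regressions check the marginal (in)dependence implications; the multiple regression checks the conditional-independence implication E independent of A given D.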
17,861 | What is the point of univariate regression before multivariate regression? | Before I try to answer, I'd like to point out that the type of data and its distribution can affect the way you evaluate/regress/classify it.
Also you might want to look here for the method that your advisor might want you to use.
A bit of background.
While using a model selection tool is a possibility, you still need to be able to say why a predictor was used or left out. Those tools can be a black box. You should fully understand your data and be able to state why a particular predictor was selected. (Especially, I assume, for a thesis or master's project.)
For example, look at the price of houses and age. The price of houses generally decreases with age. Therefore when you see an old house with a high price in your data it would look like an outlier to be removed but that's not the case.
As to
(NB: my advisor has said we are NOT using p-values as a cutoff, but that we want to consider "everything".)
p-values aren't the be-all and end-all, but they can be helpful. Recall that algorithms/programs are limited and cannot view the whole picture.
As to why you might run a univariate regression on each predictor/treatment assignment:
This could be to aid in selecting the predictors to include in the basic multivariate model. From that basic model, you would then look to see if those predictors are significant and should remain, or if they should be removed, with the aim of getting a parsimonious model.
Or it could be for you to better get an understanding of the data.
17,862 | What is the point of univariate regression before multivariate regression? | I think your supervisor is asking you to perform a first analysis of the data with the objective of identifying if any of the variables can explain a significant fraction of the variance in the data.
Once you have concluded that some of the variables can explain some of the variability, you will be able to assess how they work together: whether they are collinear, correlated with each other, etc. In a purely exploratory phase, a multivariate analysis could make a first assessment harder, because by construction each variable's estimate has the effect of the others removed. It could then be harder to assess whether any of the variables can explain any of the variation.
17,863 | What is the point of univariate regression before multivariate regression? | That may be an approach to understand the data, but experience shows that predictions will vary between using all predictors combined and using each predictor one by one.
That's just something we do to understand the predictability of the data and what needs to be done in future steps.
I have seen many times that with all variables included, the p-values say some variables are not significant, but those same variables alone were significant enough. That's due to mixing effects: it's not that your supervisor is wrong, but to understand the data we have to do this.
17,864 | Confidence interval of precision / recall and F1 score | To give some quick answers to the points raised:
The additional "$+4$" observed when calculating the "adjusted version of recall". This comes from viewing the occurrence of a True Positive as a success and the occurrence of a False Negative as a failure. Using this rationale, we follow the general recommendation from Agresti & Coull (1998) "Approximate is Better than 'Exact' for Interval Estimation of Binomial Proportions", where we "add two successes and two failures" to get the adjusted Wald interval. As $2 + 2 = 4$, our total sample size increases from $N$ to $N+4$. As the authors also explain, it is "identical to Bayes estimate (mean of posterior distribution) with parameters 2 and 2". (I will revisit this point at the end.)
The basis for the standard error formula shown. This is also motivated by the "add two successes and two failures" rationale, which drives the $+4$ in the denominator. Regarding the numerator, please note that the estimate $\hat{p}$ should be adjusted too, as mentioned above. A bit more background: assuming a 0.95 CI with $z^2 = 1.96^2 \approx 4$, the midpoint of this adjusted interval is $\frac{X+\frac{z^2}{2}}{n +z^2} \approx \frac{X+2}{n+4}$. Interestingly, it is also nearly identical to the midpoint of the 0.95 Wilson score interval.
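As a sketch of that adjusted interval in pure Python (the function name is ours; it uses $z^2$ exactly rather than the rounded $+2$/$+4$):

```python
def adjusted_wald(successes, n, z=1.96):
    # Agresti & Coull adjusted interval: with z = 1.96, z^2 ~ 4, giving the
    # familiar (x + 2) / (n + 4) midpoint described above.
    n_adj = n + z ** 2
    p_adj = (successes + z ** 2 / 2) / n_adj
    se = (p_adj * (1 - p_adj) / n_adj) ** 0.5
    return p_adj, p_adj - z * se, p_adj + z * se

# Recall from a confusion matrix with TP = 250, FN = 50 (the same counts
# used in the R example later in this answer):
p_adj, lo, hi = adjusted_wald(250, 300)
print(f"adjusted recall = {p_adj:.4f}, 0.95 CI = ({lo:.4f}, {hi:.4f})")
```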
How this carries forward to the calculation of $F_1$. The correction itself ($+2$ and $+4$) carries over trivially to the calculation of the $F_1$ score; we can apply it in how we calculate Precision (or Recall) and use the result. That said, calculating the standard error of $F_1$ is more involved. While working with a linear combination of independent random variables (RVs) is quite straightforward, $F_1$ is not a linear combination of Precision and Recall. Luckily we can express it as their harmonic mean (i.e. $F_1 = \frac{2 \cdot Prec \cdot Rec}{Prec + Rec}$). Expressing $F_1$ as the harmonic mean of Precision and Recall is preferable because it allows us to work with a product distribution for the numerator of the $F_1$ score and a ratio distribution for the overall calculation. (Convenient fact: the mean calculations are straightforward, as the expectation of the product of two independent RVs is the product of their expectations.)
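Since $F_1$ is not linear in the counts, the simplest practical route to an interval is to bootstrap the whole confusion matrix. A minimal Python sketch, using the same hypothetical counts as the R exploration further down (TP=250, TN=550, FN=50, FP=150):

```python
import random

random.seed(1)

# Hypothetical confusion-matrix counts (same as the R example further down).
cells = ["TP"] * 250 + ["TN"] * 550 + ["FN"] * 50 + ["FP"] * 150

def f1(sample):
    tp = sample.count("TP")
    fp = sample.count("FP")
    fn = sample.count("FN")
    return 2 * tp / (2 * tp + fp + fn)

reps = 5000
boot = sorted(f1(random.choices(cells, k=len(cells))) for _ in range(reps))
lo, hi = boot[int(0.025 * reps)], boot[int(0.975 * reps)]
print(f"F1 = {f1(cells):.3f}, bootstrap 0.95 percentile CI = ({lo:.3f}, {hi:.3f})")
```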
As mentioned, A&C (1998) also suggest that this correction is "identical to Bayes estimate (mean of posterior distribution) with parameters 2 and 2". This idea is fully explored in Goutte & Gaussier (2005), A Probabilistic Interpretation of Precision, Recall and F-Score, with Implication for Evaluation. Effectively we can view the whole confusion matrix as the sample realisation from a multinomial distribution. We can bootstrap it, as well as assume priors on it. To simplify things a bit, I will just focus on Recall, which can be assumed to be the realisation of a Binomial distribution, so we can use a Beta distribution as the prior. If we wanted to use the whole matrix we would use a Dirichlet distribution (i.e. a multivariate Beta distribution).
For the bootstrap, also keep in mind what Hastie et al. comment directly in "Elements of Statistical Learning" (Sect. 8.4): "we might think of the bootstrap distribution as a "poor man's" Bayes posterior. By perturbing the data, the bootstrap approximates the Bayesian effect of perturbing the parameters, and is typically much simpler to carry out."
OK, some code to make these ideas concrete.
# Set seed for reproducibility
set.seed(123)
# Define our observed sample
mySample =c( rep("TP", 250), rep("TN",550), rep("FN", 50), rep("FP", 150))
# Define our "Recall" sampling function
getRec = function(){xx = sample(mySample, replace=TRUE);
sum(xx=="TP")/(sum("TP"==xx) + sum("FN" == xx))}
# Create our bootstrap Recall sample (Give it ~ 50")
myRecalls = replicate(n = 1000000, getRec())
mean(myRecalls) # 0.8333322
# Get our empirical density and calculate the mean
theKDE = density(myRecalls)
plot(theKDE, lty=2, main= "Distribution of Recall")
grid()
abline(v = mean(myRecalls), lty=2)
theSupport = seq(min(theKDE$x), max(theKDE$x), by = 0.0001)
# Explore different priors
# Haldane prior (Complete uncertainty)
lines(col='green', theSupport, dbeta(theSupport, shape1=250+0, shape2=50+0))
maxHP = theSupport[which.max(dbeta(theSupport, shape1=250+0, shape2=50+0))]
abline(v = maxHP, col='green')
# Flat prior
lines(col='cyan', theSupport, dbeta(theSupport, shape1=250+1, shape2=50+1))
maxFP = theSupport[which.max(dbeta(theSupport, shape1=250+1, shape2=50+1))]
abline(v = maxFP, col='cyan')
# A&C suggestion
lines(col='red', theSupport, dbeta(theSupport, shape1=250+2, shape2=50+2))
maxAC = theSupport[which.max(dbeta(theSupport, shape1=250+2, shape2=50+2))]
abline(v = maxAC, col='red')
legend( "topright", lty=c(2,1,1,1),
legend = c(paste0("Bootstrap Mean: (", signif(mean(myRecalls),4), ")"),
paste0("HP Posterior MAP: (", signif(maxHP,4), ")"),
paste0("FP Posterior MAP: (", signif(maxFP,4), ")"),
paste0("A&C Posterior MAP: (", signif(maxAC,4), ")")),
col=c("black","green","cyan", "red"))
As can be seen, our flat prior's (Beta distribution $B(1,1)$) posterior MAP and our bootstrap estimate effectively coincide. Similarly, our A&C posterior MAP shows the strongest shrinkage towards a mean of 0.50, while the Haldane prior's (Beta distribution $B(0,0)$) posterior MAP is the most optimistic for our Recall. Notice that if we accept a particular posterior, the actual 0.95 CI calculation becomes trivial, as we can get it directly from the quantile function. For example, assuming a flat prior, qbeta(1-0.025, 251, 51) gives us the upper 0.95 CI bound for the posterior as 0.8711659. Similarly, the posterior mean is estimated as $\frac{\alpha}{\alpha + \beta}$, or in the case of a flat prior, 0.8311 (251/(251+51)).
The additional "$+4$"observed when calculated the "adjusted version of recall". This comes from the viewing the occurrence of a True Positive as a succ | Confidence interval of precision / recall and F1 score
To give some quick answers to the points raised:
The additional "$+4$"observed when calculated the "adjusted version of recall". This comes from the viewing the occurrence of a True Positive as a success and the occurrence of a False Negative as a failure. Using this rationale, we follow the general recommendation from Agresti & Coull (1998) "Approximate is Better than 'Exact' for Interval Estimation of Binomial Proportions" where we "add two successes and two failures" to get adjusted Wald interval. As $2 + 2 = 4$, our total sample size increases from $N$ to $N+4$. As the authors also explain it is "identical to Bayes estimate (mean of posterior distribution) with parameters 2 and 2". (I will revisit this point in a end).
The basis for the standard error formula shown. This is the also motivated by the "add two successes and two failures" rationale. This drives the $+4$ on the denominator. Regarding the numerator, please note that the estimate $\hat{p}$ should be adjusted too as mentioned above. A bit more background: assuming a 0.95 CI with $z^2 = 1.96^2 \approx 4$, then the midpoint of this adjust interval $\frac{(X+\frac{z^2}{2})}{(n +z^2)} \approx \frac{X+2}{n+4}$. Interestingly is also nearly identical to the midpoint of the 0.95 Wilson score interval.
How this carries forward to the calculations of $F_1$. This correction in itself ($+2$ and $+4$) is somewhat trivial to be also to the calculation of the $F_1$ score;we can apply it on how we calculate Precision (or Recall) and use the result. That said, calculating the standard error of the $F_1$ is more involved. While working with the linear combination of independent random variables (RVs) is quite straightforward, $F_1$ is not a linear combination of Precision and Recall. Luckily we can express it as their harmonic mean though (i.e. $F_1 = \frac{2 * Prec * Rec}{Prec + Rec}$. It is preferable to use the harmonic mean of Precision and Recall as the way of expressing $F_1$ because it allows us to work with a product distribution when it comes to the numerator of the $F_1$ score and a ratio distribution when it comes to the overall calculation. (Convenient fact: The mean calculations are straightforward as the expectation of the product of two RVs is the product of their expectations.)
As mentioned A&C (1998) also suggests that this correction is "identical to Bayes estimate (mean of posterior distribution) with parameters 2 and 2". This idea is fully explored in Goutte & Gaussier (2005) A Probabilistic Interpretation of Precision, Recall and F-Score, with Implication for Evaluation. Effectively we can view the whole confusion matrix as the sample realisation from a multinomial distribution. We can bootstrap it, as well as assume priors on it. To simplify things a bit a will just focus on Recall which can be assumed to be just the realisation of a Binomial distribution so we can use a Beta distribution as the prior. If we wanted to use the whole matrix we would use a Dirichlet distribution (i.e. a multivariate beta distribution).
For the bootstrap also keep in mind that as Hastie et al. commented in "Elements of Statistical Learning" (Sect. 8.4) directly: "we might think of the bootstrap distribution as a "poor man's" Bayes posterior. By perturbing the data, the bootstrap approximates the Bayesian effect of perturbing the parameters, and is typically much simpler to carry out."
OK, some code to make these concrete.
# Set seed for reproducibility
set.seed(123)
# Define our observed sample
mySample =c( rep("TP", 250), rep("TN",550), rep("FN", 50), rep("FP", 150))
# Define our "Recall" sampling function
getRec = function(){xx = sample(mySample, replace=TRUE);
sum(xx=="TP")/(sum("TP"==xx) + sum("FN" == xx))}
# Create our bootstrap Recall sample (Give it ~ 50")
myRecalls = replicate(n = 1000000, getRec())
mean(myRecalls) # 0.8333322
# Get our empirical density and calculate the mean
theKDE = density(myRecalls)
plot(theKDE, lty=2, main= "Distribution of Recall")
grid()
abline(v = mean(myRecalls), lty=2)
theSupport = seq(min(theKDE$x), max(theKDE$x), by = 0.0001)
# Explore different priors
# Haldane prior (Complete uncertainty)
lines(col='green', theSupport, dbeta(theSupport, shape1=250+0, shape2=50+0))
maxHP = theSupport[which.max(dbeta(theSupport, shape1=250+0, shape2=50+0))]
abline(v = maxHP, col='green')
# Flat prior
lines(col='cyan', theSupport, dbeta(theSupport, shape1=250+1, shape2=50+1))
maxFP = theSupport[which.max(dbeta(theSupport, shape1=250+1, shape2=50+1))]
abline(v = maxFP, col='cyan')
# A&C suggestion
lines(col='red', theSupport, dbeta(theSupport, shape1=250+2, shape2=50+2))
maxAC = theSupport[which.max(dbeta(theSupport, shape1=250+2, shape2=50+2))]
abline(v = maxAC, col='red')
legend( "topright", lty=c(2,1,1,1),
legend = c(paste0("Boostrap Mean: (", signif(mean(myRecalls),4), ")"),
paste0("HP Posterior MAP: (", signif(maxHP,4), ")"),
paste0("FP Posterior MAP: (", signif(maxFP,4), ")"),
paste0("A&C Posterior MAP: (", signif(maxAC,4), ")")),
col=c("black","green","cyan", "red"))
As it can be seen our flat prior's (Beta distribution $B(1,1)$) posterior MAP and our bootstrap estimates effectively coincide. Similarly, our A&C posterior MAP shows the strongest shrinkage towards a mean of 0.50 while the Haldane prior's (Beta distribution $B(0,0)$) posterior MAP is the most optimistic for our Recall. Notice that if we accept a particular posterior, the actual 0.95 CI calculation becomes trivial as we can directly get it from the quantile functions. For example, assuming a flat prior qbeta(1-0.025, 251,51) will give us the upper 0.95 CI for the posterior as 0.8711659. Similarly the posterior mean is estimated as $\frac{\alpha}{\alpha + \beta}$ or in the case of a flat prior as 0.8311 (251/(251+51)). | Confidence interval of precision / recall and F1 score
To give some quick answers to the points raised:
The additional "$+4$"observed when calculated the "adjusted version of recall". This comes from the viewing the occurrence of a True Positive as a succ |
17,865 | LASSO relationship between $\lambda$ and $t$ | This is the standard solution for ridge regression:
$$
\beta = \left( X'X + \lambda I \right) ^{-1} X'y
$$
We also know that $\| \beta \| = t$, so it must be true that
$$
\| \left( X'X + \lambda I \right) ^{-1} X'y \| = t
$$
which is possible, but not easy to solve for $\lambda$.
Your best bet is to just keep doing what you're doing: compute $t$ on the same sub-sample of the data across multiple $\lambda$ values.
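A sketch of that procedure in the one-predictor case (made-up data; here $\beta(\lambda) = X'y/(X'X+\lambda)$ is the 1-D case of the formula above, and since $|\beta|$ decreases monotonically in $\lambda$, the $\lambda$ matching a given $t$ can even be recovered by bisection):

```python
import random

random.seed(2)

# Made-up one-predictor data with true slope 3.
n = 200
x = [random.gauss(0, 1) for _ in range(n)]
y = [3.0 * xi + random.gauss(0, 1) for xi in x]
sxy = sum(a * b for a, b in zip(x, y))
sxx = sum(a * a for a in x)

def ridge_beta(lam):
    # one-predictor case of beta = (X'X + lam I)^{-1} X'y
    return sxy / (sxx + lam)

def lambda_for_t(t, lo=0.0, hi=1e9, iters=200):
    # |beta| shrinks monotonically in lambda, so bisection inverts the map
    # (assumes 0 < t < |OLS solution|)
    for _ in range(iters):
        mid = (lo + hi) / 2
        if abs(ridge_beta(mid)) > t:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

lam = lambda_for_t(1.5)
print(f"lambda for t = 1.5: {lam:.2f}; |beta(lambda)| = {abs(ridge_beta(lam)):.4f}")
```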
17,866 | LASSO relationship between $\lambda$ and $t$ | This question relates to Is the magnitude coefficient vector in Ridge regression monotonic in lambda? which sketches a situation for ridge regression, but it is similar for Lasso.
Consider the relationship of the optimal RSS as a function of the value of $t = \vert \beta \vert$. Say that this function is $RSS = f(t)$.
The goal of lasso is to find the $\beta$ which minimizes $$\text{Cost}(\beta) = RSS(\beta) + \lambda \vert\beta\vert$$
We could describe the cost as well as a function of the magnitude of the coefficients $t$
$$\text{Cost}(t) = f(t) + \lambda t$$
this is minimized when
$$\frac\partial{\partial t} \text{Cost}(t) = \frac\partial{\partial t} f(t) + \lambda = 0 $$
And the relationship between $\lambda$ and $t$ is
$$\lambda = - \frac\partial{\partial t} f(t)$$
This function $f(t)$, the size of the RSS for a given size of the estimates of the coefficients, is dependent on the data.
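A quick numerical check of $\lambda = - \frac\partial{\partial t} f(t)$ in the one-predictor case (made-up data): estimate $-f'(t)$ by a finite difference, plug that $\lambda$ into the penalized problem, and verify that its solution has magnitude $t$.

```python
import random

random.seed(3)

# Made-up one-predictor data; the OLS slope is about 2, so t = 1 binds.
n = 300
x = [random.gauss(0, 1) for _ in range(n)]
y = [2.0 * xi + random.gauss(0, 1) for xi in x]
sxy = sum(a * b for a, b in zip(x, y))
sxx = sum(a * a for a in x)

def f(t):
    # optimal RSS when the coefficient is constrained to magnitude t
    # (one positive predictor with t below the OLS slope, so beta = t)
    return sum((yi - t * xi) ** 2 for xi, yi in zip(x, y))

t = 1.0
eps = 1e-5
lam = -(f(t + eps) - f(t - eps)) / (2 * eps)  # lambda = -df/dt, numerically

# stationarity of RSS(beta) + lam*|beta| at beta > 0:
# -2*(sxy - sxx*beta) + lam = 0  =>  beta = (sxy - lam/2) / sxx
beta_pen = (sxy - lam / 2) / sxx
print(f"lambda = {lam:.2f}, penalized beta = {beta_pen:.6f} (target t = {t})")
```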
17,867 | Derivation of normalizing transform for GLMs | The slides you link to are somewhat confusing, leaving out steps and making a few typos, but they are ultimately correct. It will help to answer question 2 first, then 1, and then finally derive the symmetrizing transformation $A(u) = \int_{-\infty}^u \frac{1}{[V(\theta)]^{1/3}} d\theta$.
Question 2. We are analyzing $\bar{X}$ as it is the mean of a sample of size $N$ of i.i.d. random variables $X_1, ..., X_N$. This is an important quantity because sampling from the same distribution and taking the mean happens all the time in science. We want to know how close $\bar{X}$ is to the true mean $\mu$. The Law of Large Numbers says it will converge to $\mu$ as $N \to \infty$, but we would like to know the variance and skewness of $\bar{X}$.
Question 1. Your Taylor series approximation is not incorrect, but we need to be careful about keeping track of $\bar{X}$ vs. $X_i$ and powers of $N$ to get to the same conclusion as the slides. We'll start with the definitions of $\bar{X}$ and central moments of $X_i$ and derive the formula for $\kappa_3(h(\bar{X}))$:
$\bar{X} = \frac{1}{N}\sum_{i=1}^N X_i$
$\mathbb{E}[X_i] = \mu$
$V(X_i) = \mathbb{E}[(X_i - \mu)^2] = \sigma^2$
$\kappa_3(X_i) = \mathbb{E}[(X_i - \mu)^3]$
Now, the central moments of $\bar{X}$:
$\mathbb{E}[\bar{X}] = \frac{1}{N}\sum_{i=1}^N \mathbb{E}[X_i] = \frac{1}{N}(N\mu) = \mu$
$\begin{align}
V(\bar{X}) &=\mathbb{E}[(\bar{X} - \mu)^2]\\
&=\mathbb{E}[\Big((\frac{1}{N}\sum_{i=1}^N X_i) - \mu\Big)^2]\\
&=\mathbb{E}[\Big(\frac{1}{N}\sum_{i=1}^N (X_i - \mu)\Big)^2]\\
&=\frac{1}{N^2}\Big(N\mathbb{E}[(X_i - \mu)^2] + N(N-1)\mathbb{E}[X_i - \mu]\mathbb{E}[X_j - \mu]\Big)\\
&= \frac{1}{N}\sigma^2
\end{align}$
The last step follows since $\mathbb{E}[X_i - \mu] = 0$, and $\mathbb{E}[(X_i - \mu)^2] = \sigma^2$. This might not have been the easiest derivation of $V(\bar{X})$, but it is the same process we need to do to find $\kappa_3(\bar{X})$ and $\kappa_3(h(\bar{X}))$, where we break up a product of a summation and count the number of terms with powers of different variables. In the above case, there were $N$ terms that were of the form $(X_i - \mu)^2$ and $N(N-1)$ terms of the form $(X_i - \mu)(X_j - \mu)$.
$\begin{align}
\kappa_3(\bar{X}) &= \mathbb{E}[(\bar{X}-\mu)^3)]\\
&= \mathbb{E}[\Big((\frac{1}{N}\sum_{i=1}^N X_i) - \mu\Big)^3]\\
&= \mathbb{E}[\Big(\frac{1}{N}\sum_{i=1}^N (X_i - \mu)\Big)^3]\\
&= \frac{1}{N^3}\Big(N\mathbb{E}[(X_i - \mu)^3] + 3N(N-1)\mathbb{E}[(X_i - \mu)]\mathbb{E}[(X_j - \mu)^2]+N(N-1)(N-2)\mathbb{E}[(X_i - \mu)]\mathbb{E}[(X_j - \mu)]\mathbb{E}[(X_k - \mu)]\Big)\\
&= \frac{1}{N^2}\mathbb{E}[(X_i - \mu)^3]\\
&= \frac{\kappa_3(X_i)}{N^2}
\end{align}$
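A quick Monte Carlo check of $\kappa_3(\bar{X}) = \kappa_3(X_i)/N^2$ (the exponential distribution below is an arbitrary choice; it has $\kappa_3 = 2$):

```python
import random

random.seed(5)

# Exponential(1) has kappa_3 = 2; with N = 5 the claim gives
# kappa_3(X-bar) = 2 / 5^2 = 0.08.
N, reps = 5, 400000

def third_central_moment(values):
    m = sum(values) / len(values)
    return sum((v - m) ** 3 for v in values) / len(values)

means = [sum(random.expovariate(1.0) for _ in range(N)) / N for _ in range(reps)]
k3_mean = third_central_moment(means)
print(f"estimated kappa_3(X-bar) = {k3_mean:.4f} (theory: 0.0800)")
```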
Next, we will expand $h(\bar{X})$ in a Taylor series as you have:
$h(\bar{X}) = h(\mu) + h'(\mu)(\bar{X} - \mu) + \frac{1}{2}h''(\mu)(\bar{X}-\mu)^2 + \frac{1}{6}h'''(\mu)(\bar{X}-\mu)^3 + ...$
$\begin{align}
\mathbb{E}[h(\bar{X})] &= h(\mu) + h'(\mu)\mathbb{E}[\bar{X} - \mu] + \frac{1}{2}h''(\mu)\mathbb{E}[(\bar{X}-\mu)^2] + \frac{1}{6}h'''(\mu)\mathbb{E}[(\bar{X}-\mu)^3] + ...\\
&= h(\mu) + \frac{1}{2}h''(\mu)\frac{\sigma^2}{N} + \frac{1}{6}h'''(\mu)\frac{\kappa_3(X_i)}{N^2} + ...\\
\end{align}$
With some more effort you could prove the rest of the terms are $O(N^{-3})$. Finally, since $\kappa_3(h(\bar{X})) = \mathbb{E}[(h(\bar{X})-\mathbb{E}[h(\bar{X})])^3]$, (which is not the same as $\mathbb{E}[(h(\bar{X})-h(\mu))^3]$), we again make a similar computation:
$\begin{align}
\kappa_3(h(\bar{X})) &= \mathbb{E}[(h(\bar{X})-\mathbb{E}[h(\bar{X})])^3]\\
&=\mathbb{E}\Big[\Big(h(\mu) + h'(\mu)(\bar{X} - \mu) + \frac{1}{2}h''(\mu)(\bar{X}-\mu)^2 + O((\bar{X}-\mu)^3) - h(\mu) - \frac{1}{2}h''(\mu)\frac{\sigma^2}{N} - O(N^{-2})\Big)^3\Big]
\end{align}$
We are only interested in the terms resulting in order $O(N^{-2})$, and with extra work you could show that you do not need the terms "$O((\bar{X}-\mu)^3)$" or "$- O(N^{-2})$" before taking the third power, as they will only result in terms of order $O(N^{-3})$. So, simplifying, we get
$\begin{align}
\kappa_3(h(\bar{X})) &= \mathbb{E}\Big[\Big(h'(\mu)(\bar{X} - \mu) + \frac{1}{2}h''(\mu)(\bar{X}-\mu)^2 - \frac{1}{2}h''(\mu)\frac{\sigma^2}{N})\Big)^3\Big]\\
&=\mathbb{E}\Big[h'(\mu)^3(\bar{X} - \mu)^3 + \frac{1}{8}h''(\mu)^3(\bar{X}-\mu)^6 - \frac{1}{8}h''(\mu)^3\frac{\sigma^6}{N^3} + \frac{3}{2}h'(\mu)^2h''(\mu)(\bar{X}-\mu)^4 + \frac{3}{4}h'(\mu)h''(\mu)^2(\bar{X}-\mu)^5 - \frac{3}{2}h'(\mu)^2h''(\mu)(\bar{X} - \mu)^2\frac{\sigma^2}{N} + O(N^{-3})\Big]
\end{align}$
I left off some terms that were obviously $O(N^{-3})$ in this product. You'll have to convince yourself that the terms $\mathbb{E}[(\bar{X}-\mu)^5]$ and $\mathbb{E}[(\bar{X}-\mu)^6]$ are $O(N^{-3})$ as well. However,
$\begin{align}
\mathbb{E}[(\bar{X}-\mu)^4] &= \mathbb{E}[\frac{1}{N^4}\Big(\sum_{i=1}^N(X_i-\mu)\Big)^4]\\
&=\frac{1}{N^4}\Big(N\mathbb{E}[(X_i-\mu)^4] + 3N(N-1)\mathbb{E}[(X_i-\mu)^2]\mathbb{E}[(X_j-\mu)^2] + 0\Big)\\
&=\frac{3}{N^2}\sigma^4 + O(N^{-3})
\end{align}$
Then distributing the expectation on our equation for $\kappa_3(h(\bar{X}))$, we have
$\begin{align}\kappa_3(h(\bar{X})) &= h'(\mu)^3\mathbb{E}[(\bar{X} - \mu)^3] + \frac{3}{2}h'(\mu)^2h''(\mu)\mathbb{E}[(\bar{X}-\mu)^4] - \frac{3}{2}h'(\mu)^2h''(\mu)\mathbb{E}[(\bar{X} - \mu)^2]\frac{\sigma^2}{N} + O(N^{-3})\\
&= h'(\mu)^3\frac{\kappa_3(X_i)}{N^2} + \frac{9}{2}h'(\mu)^2h''(\mu)\frac{\sigma^4}{N^2} - \frac{3}{2}h'(\mu)^2h''(\mu)\frac{\sigma^4}{N^2} + O(N^{-3})\\
&=h'(\mu)^3\frac{\kappa_3(X_i)}{N^2} + 3h'(\mu)^2h''(\mu)\frac{\sigma^4}{N^2} + O(N^{-3})
\end{align}$
This concludes the derivation of $\kappa_3(h(\bar{X}))$. Now, at last, we will derive the symmetrizing transform $A(u) = \int_{-\infty}^u \frac{1}{[V(\theta)]^{1/3}} d\theta$.
For this transformation, it is important that $X_i$ is from an exponential family distribution, and in particular a natural exponential family (or has been transformed into one), of the form $f_{X_i}(x;\theta) = g(x)\exp(\theta x - b(\theta))$ (writing the base measure as $g$ to avoid a clash with our transformation $h$).
In this case, the cumulants of the distribution are given by $\kappa_k = b^{(k)}(\theta)$. So $\mu = b'(\theta)$, $\sigma^2 = V(\theta) = b''(\theta)$, and $\kappa_3 = b'''(\theta)$. We can write the parameter $\theta$ as a function of $\mu$ just taking the inverse of $b'$, writing $\theta(\mu) = (b')^{-1}(\mu)$. Then
$\theta'(\mu) = \frac{1}{b''((b')^{-1}(\mu))} = \frac{1}{b''(\theta))} = \frac{1}{\sigma^2}$
Next we can write the variance as a function of $\mu$, and call this function $\bar{V}$:
$\bar{V}(\mu) = V(\theta(\mu)) = b''(\theta(\mu))$
Then
$\frac{d}{d\mu}\bar{V}(\mu) = V'(\theta(\mu))\theta'(\mu) = b'''(\theta)\frac{1}{\sigma^2} = \frac{\kappa_3}{\sigma^2}$
So as a function of $\mu$, $\kappa_3(\mu) = \bar{V}'(\mu)\bar{V}(\mu)$.
Now, for the symmetrizing transformation, we want to reduce the skewness of $h(\bar{X})$ by making $h'(\mu)^3\frac{\kappa_3(X_i)}{N^2} + 3h'(\mu)^2h''(\mu)\frac{\sigma^4}{N^2} = 0$ so that $\kappa_3(h(\bar{X}))$ is $O(N^{-3})$. Thus, we want
$h'(\mu)^3\kappa_3(X_i) + 3h'(\mu)^2h''(\mu)\sigma^4 = 0$
Substituting our expressions for $\sigma^2$ and $\kappa_3$ as functions of $\mu$, we have:
$h'(\mu)^3\bar{V}'(\mu)\bar{V}(\mu) + 3h'(\mu)^2h''(\mu)\bar{V}(\mu)^2 = 0$
So $h'(\mu)^3\bar{V}'(\mu) + 3h'(\mu)^2h''(\mu)\bar{V}(\mu) = 0$, leading to $\frac{d}{d\mu}(h'(\mu)^3\bar{V}(\mu)) = 0$.
One solution to this differential equation is:
$h'(\mu)^3\bar{V}(\mu) = 1$,
$h'(\mu) = \frac{1}{[\bar{V}(\mu)]^{1/3}}$
So, $h(\mu) = \int_c^\mu \frac{1}{[\bar{V}(\theta)]^{1/3}} d\theta$, for any constant, $c$. This gives us the symmetrizing transformation $A(u) = \int_{-\infty}^u \frac{1}{[V(\theta)]^{1/3}} d\theta$, where $V$ is the variance as a function of the mean in a natural exponential family. | Derivation of normalizing transform for GLMs | The slides you link to are somewhat confusing, leaving out steps and making a few typos, but they are ultimately correct. It will help to answer question 2 first, then 1, and then finally derive the s | Derivation of normalizing transform for GLMs
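As a concrete instance of this result (an added illustration): for the Poisson family the variance function is $\bar{V}(\mu)=\mu$, so the recipe gives $h(\mu)=\int_0^\mu t^{-1/3}\,dt=\tfrac{3}{2}\mu^{2/3}$, the classical two-thirds-power transform for counts. A quick numerical sketch:

```python
def poisson_symmetrizer(u, steps=100000):
    """Numerically integrate A(u) = ∫_0^u Vbar(t)^(-1/3) dt with Vbar(t) = t
    (the Poisson variance function); the midpoint rule copes with the
    integrable singularity at t = 0."""
    h = u / steps
    return h * sum(((k + 0.5) * h) ** (-1.0 / 3.0) for k in range(steps))

u = 4.0
A = poisson_symmetrizer(u)
assert abs(A - 1.5 * u ** (2 / 3)) < 1e-3   # closed form (3/2) u^(2/3)
# the defining relation h'(mu)^3 * Vbar(mu) = 1 holds for this h:
mu = 2.7
assert abs((mu ** (-1 / 3)) ** 3 * mu - 1.0) < 1e-12
```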
17,868 | Derivation of normalizing transform for GLMs | $\blacksquare$ 1. Why can't I get the same result by approximating in terms of noncentral moments $\mathbb{E}\bar{X}^k$ and then calculating the central moments $\mathbb{E}(\bar{X}-\mathbb{E}\bar{X})^k$ using the approximate noncentral moments?
Because you change the derivation arbitrarily and drop the remainder term, which is important. If you are not familiar with big O notation and the relevant results, a good reference is [Casella&Lehmann].
$$h(\bar{X}) - h(\mu) \approx h'(\mu)(\bar{X} - \mu) + \frac{h''(\mu)}{2}(\bar{X} - \mu)^2 +O[(\bar{X} - \mu)^3]$$
$$\mathbb{E}[h(\bar{X}) - h(\mu)] \approx h'(\mu)\mathbb{E}(\bar{X} - \mu) + \frac{h''(\mu)}{2}\mathbb{E}(\bar{X} - \mu)^2+(?) $$
But even if you do not drop the remainder, arguing that you always take $N\rightarrow \infty$ (which is not legitimate...), the following step:
$$
\mathbb{E}\left(h(\bar{X}) - h(\mu)\right)^3 \approx h'(\mu)^3 \mathbb{E}(\bar{X}-\mu)^3 + \frac{3}{2}h'(\mu)^2h''(\mu) \mathbb{E}(\bar{X} - \mu)^4 + \frac{3}{4}h'(\mu)h''(\mu)^2 \mathbb{E}(\bar{X}-\mu)^5 + \frac{1}{8}h''(\mu)^3 \mathbb{E}(\bar{X} - \mu)^6 \tag{1}
$$
is saying that $$\int [h(x)-h(x_0)]^3dx=\int [h'(x_0)(x-x_0)+\frac{1}{2}h''(x_0)(x-x_0)^2+O((x-x_0)^3)]^3dx=(1)$$
If this is still not clear, the algebra of expanding the integrand goes as
$[h'(x_0)(x-x_0)+\frac{1}{2}h''(x_0)(x-x_0)^2+O((x-x_0)^3)]^3 \quad (2)$
Letting $A=h'(x_0)(x-x_0)$, $B=\frac{1}{2}h''(x_0)(x-x_0)^2$, $C=O((x-x_0)^3)$,
$(2)=[A+B+C]^3$
$\color{red}{\neq}[A^3+3A^2 B+3A B^2+B^3]=[A+B]^3=(1)$
Your mistake is to omit the remainder before expanding, which is a "classical" mistake with big O notation and one reason the notation is sometimes criticized.
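A small numerical illustration of this point (added here): with $A\sim\varepsilon$, $B\sim\varepsilon^2$, $C\sim\varepsilon^3$, the gap between cubing $A+B+C$ and cubing the truncation $A+B$ is of order $\varepsilon^5$, which is larger than the retained term $B^3=O(\varepsilon^6)$.

```python
def truncation_gap(eps, a=1.0, b=1.0, c=1.0):
    """Gap between cubing the full expansion A + B + C and cubing the
    truncation A + B, with A ~ eps, B ~ eps^2 and remainder C ~ eps^3."""
    A, B, C = a * eps, b * eps ** 2, c * eps ** 3
    return (A + B + C) ** 3 - (A + B) ** 3

g1, g2 = truncation_gap(1e-2), truncation_gap(5e-3)
ratio = g1 / g2
# the dropped part is dominated by 3*A^2*C = O(eps^5):
# halving eps shrinks it roughly 2^5 = 32-fold
assert 31.0 < ratio < 33.0
# ...whereas the retained term B^3 is only O(eps^6),
# i.e. smaller than what was thrown away
assert (1e-2 ** 2) ** 3 < g1
```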
$\blacksquare$ 2.Why does the analysis start with $\bar{X}$ instead of $X$, the quantity we actually care about?
Because we want to base our analysis on the sufficient statistic of the exponential family model we are introducing. If you have a sample of size 1, then there is no difference whether you analyze $\bar{X}=\frac{1}{n}\sum_{i=1}^{n}X_i$ or $X_1$.
This is a good lesson in big O notation though it is not relevant to GLM...
Reference
[Casella&Lehmann]Lehmann, Erich Leo, and George Casella. Theory of point estimation. Springer Science & Business Media, 2006. | Derivation of normalizing transform for GLMs | $\blacksquare$ 1.Why can't I get the same result by approximating in terms of noncentral moments $\mathbb{E}\bar{X}^k$ and then calculate the central moments $\mathbb{E}(\bar{X}-\mathbb{E}\bar{X})^k$u | Derivation of normalizing transform for GLMs
17,869 | Is it possible to accept the alternative hypothesis? | IMO (as not-a-logician or formally trained statistician per se), one shouldn't take any of this language too seriously. Even rejecting a null when p < .001 doesn't make the null false without a doubt. What's the harm in "accepting" the alternative hypothesis in a similarly provisional sense then? It strikes me as a safer interpretation than "accepting the null" in the opposite scenario (i.e., a large, insignificant p), because the alternative hypothesis is so much less specific. E.g., given $\alpha=.05$, if p = .06, there's still a 94% chance that future studies would find an effect that's at least as different from the null*, so accepting the null isn't a smart bet even if one cannot reject the null. Conversely, if p = .04, one can reject the null, which I've always understood to imply favoring the alternative. Why not "accepting"? The only reason I can see is the fact that one could be wrong, but the same applies when rejecting.
The alternative isn't a particularly strong claim, because as you say, it covers the whole "space". To reject your null, one must find a reliable effect on either side of the null such that the confidence interval doesn't include the null. Given such a confidence interval (CI), the alternative hypothesis is true of it: all values within are unequal to the null. The alternative hypothesis is also true of values outside the CI but more different from the null than the most extremely different value within the CI (e.g., if $\rm CI_{95\%}=[.6,.8]$, it wouldn't even be a problem for the alternative hypothesis if $\mathbb P(\rm head)=.9$). If you can get a CI like that, then again, what's not to accept about it, let alone the alternative hypothesis?
There might be some argument of which I'm unaware, but I doubt I'd be persuaded. Pragmatically, it might be wise not to write that you're accepting the alternative if there are reviewers involved, because success with them (as with people in general) often depends on not defying expectations in unwelcome ways. There's not much at stake anyway if you're not taking "accept" or "reject" too strictly as the final truth of the matter. I think that's the more important mistake to avoid in any case.
It's also important to remember that the null can be useful even if it's probably untrue. In the first example I mentioned where p = .06, failing to reject the null isn't the same as betting that it's true, but it's basically the same as judging it scientifically useful. Rejecting it is basically the same as judging the alternative to be more useful. That seems close enough to "acceptance" to me, especially since it isn't much of a hypothesis to accept.
BTW, this is another argument for focusing on CIs: if you can reject the null using Neyman–Pearson-style reasoning, then it doesn't matter how much smaller than $\alpha$ the p is for the sake of rejecting the null. It may matter by Fisher's reasoning, but if you can reject the null at a level of $\alpha$ that works for you, then it might be more useful to carry that $\alpha$ forward in a CI instead of just rejecting the null more confidently than you need to (a sort of statistical "overkill"). If you have a comfortable error rate $\alpha$ in advance, try using that error rate to describe what you think the effect size could be within a $\rm CI_{(1-\alpha)}$. This is probably more useful than accepting a more vague alternative hypothesis for most purposes.
* Another important point about the interpretation of this example p value is that it represents this chance for the scenario in which it is given that the null is true. If the null is untrue as evidence would seem to suggest in this case (albeit not persuasively enough for conventional scientific standards), then that chance is even greater. In other words, even if the null is true (but one doesn't know this), it wouldn't be wise to bet so in this case, and the bet is even worse if it's untrue! | Is it possible to accept the alternative hypothesis? | IMO (as not-a-logician or formally trained statistician per se), one shouldn't take any of this language too seriously. Even rejecting a null when p < .001 doesn't make the null false without a doubt. | Is it possible to accept the alternative hypothesis?
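As a concrete companion to the CI-focused suggestion above (an added sketch, not part of the original answer; the Wilson score interval is one standard choice, and the numbers are purely illustrative):

```python
from math import sqrt

def wilson_ci(successes, n, z=1.959963984540054):
    """Wilson score interval for a binomial proportion (~95% for this z)."""
    phat = successes / n
    denom = 1 + z ** 2 / n
    centre = (phat + z ** 2 / (2 * n)) / denom
    half = z * sqrt(phat * (1 - phat) / n + z ** 2 / (4 * n ** 2)) / denom
    return centre - half, centre + half

# purely illustrative data: 60 heads in 100 flips
lo, hi = wilson_ci(60, 100)
# the interval excludes 0.5: one can reject P(head) = 0.5 at the 5% level
# while also reporting the plausible range of effect sizes
assert lo > 0.5
```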
17,870 | Is it possible to accept the alternative hypothesis? | Assuming that by throwing the coin several times you get the sequence (head, tail, head, head, head).
What you truly compute with hypothesis testing is actually ℙ[ obtaining (head, tail, head, head, head) | ℙ(head) = 0.5 ]
That is, you get an answer to the following question:
Assuming H0: ℙ(head) = 0.5, do I get the sequence (head, tail, head, head, head) at least 5% of the time?
So the question is formulated in such a way that you simply cannot get an answer to question 1, "Is ℙ(head) ≠ 0.5 true?"
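As a concrete aside (an added sketch, not part of the original answer): every particular ordered sequence has probability 1/32 under H0, so in practice one summarizes the data by the number of heads and computes a tail probability under the null.

```python
from math import comb

def binom_tail(n, k, p=0.5):
    """P(at least k heads in n flips) under the null P(head) = p."""
    return sum(comb(n, j) * p ** j * (1 - p) ** (n - j)
               for j in range(k, n + 1))

# the sequence (head, tail, head, head, head) has 4 heads in 5 flips
p_value = binom_tail(5, 4)           # one-sided: P(X >= 4 | p = 0.5)
assert abs(p_value - 6 / 32) < 1e-12  # 0.1875, well above 0.05
```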
Both statements are not mutually exclusive. It is not because one proposition is proven wrong that another is necessarily true.
So in case 1, is it correct to say "we accept H1"? Answer is no, and your conclusion:
We have evidence strong enough to believe that H0 is not true, but
we may not have evidence strong enough to believe that H1 is true.
Therefore, "rejecting H0" does not automatically imply "accepting H1"
seems right to me.
Scientific theories are only built upon a certain set of propositions, until one of them is proven wrong. Along those lines the general idea of hypothesis testing is to rule out an immediate contradiction of a proposition by readily available facts, but it does not provide a proof of it. | Is it possible to accept the alternative hypothesis? | Assuming that by throwing the coin several times you get the sequence (head, tail, head, head, head)
17,871 | Product of two independent random variables | We have, assuming $\psi$ has support on the positive real line,
$$\xi \,\psi = X$$ where $X \sim F_n$ and $F_n$ is the empirical distribution of the data.
Taking the log of this equation we get,
$$ Log(\xi) + Log(\psi) = Log(X) $$
Thus, by Lévy's continuity theorem and the independence of $\xi$ and $\psi$,
taking characteristic functions:
$$ \Psi_{Log(\xi)}(t)\Psi_{Log(\psi)}(t) = \Psi_{Log(X)}$$
Now $\xi\sim \mathrm{Unif}[0,1]$, therefore $-Log(\xi) \sim \mathrm{Exp}(1)$.
Thus,
$$\Psi_{Log(\xi)}(t)= \left(1 + it\right)^{-1}$$
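This identity is easy to verify numerically (an added sketch): $\Psi_{Log(\xi)}(t)=\mathbb{E}[\xi^{it}]=\int_0^1 u^{it}\,du$, which the substitution $y=-\log u$ turns into $\int_0^\infty e^{-(1+it)y}\,dy = (1+it)^{-1}$.

```python
import cmath

def cf_log_uniform(t, ymax=40.0, steps=200000):
    """E[exp(it*log(xi))] for xi ~ Unif(0,1), by midpoint quadrature after
    the substitution y = -log(u), i.e. the integral of e^{-(1+it)y} dy."""
    h = ymax / steps
    return h * sum(cmath.exp(-(1 + 1j * t) * (k + 0.5) * h)
                   for k in range(steps))

t = 0.7
val = cf_log_uniform(t)
# matches the closed form (1 + it)^{-1}
assert abs(val - 1 / (1 + 1j * t)) < 1e-6
```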
Given that $\Psi_{Log(X)}(t) =\frac{1}{n}\sum_{k=1}^{n}\exp(itX_k),$
with $X_1, \dots, X_n$ (here $n = 1000$) the random sample of $Log(X)$ values.
We can now specify completly the distribution of $Log(\psi)$ through its characteristic function:
$$ \left(1 + it\right)^{-1}\,\Psi_{Log(\psi)}(t) = \frac{1}{n}\sum_{k=1}^{n}\exp(itX_k)$$
If we assume that the moment generating function of $\ln(\psi)$ exists, then for $t > -1$ (where $M_{Log(\xi)}(t) = \mathbb{E}[\xi^t] = (1+t)^{-1}$) we can write the above equation in terms of moment generating functions:
$$ M_{Log(\psi)}(t) = \left(1 + t\right)\,\frac{1}{n}\sum_{k=1}^{n}\exp(t\,X_k)$$
It then suffices to invert the moment generating function to obtain the distribution of $\ln(\psi)$, and thus that of $\psi$.
17,872 | Visualizing mixed model results | Predicting counts using the fixed-effects part of your model means that you set the random effects to zero (i.e., their mean). This means that you can "forget" about them and use standard machinery to calculate the predictions and the standard errors of the predictions (with which you can compute the confidence intervals).
This is an example using Stata, but I suppose it can be easily "translated" into R language:
webuse epilepsy, clear
xtmepoisson seizures treat visit || subject: visit
predict log_seiz, xb
gen pred_seiz = exp(log_seiz)
predict std_log_seiz, stdp
gen ub = exp(log_seiz+invnorm(.975)*std_log_seiz)
gen lb = exp(log_seiz-invnorm(.975)*std_log_seiz)
tw (line pred_seiz ub lb visit if treat == 0, sort lc(black black black) ///
lp(l - -)), scheme(s1mono) legend(off) ytitle("Predicted Seizures") ///
xtitle("Visit")
The graph refers to treat == 0 and it's intended to be an example (visit is not a really continuous variable, but it's just to get the idea). The dashed lines are 95% confidence intervals. | Visualizing mixed model results | Predicting counts using the fixed-effects part of your model means that you set to zero (i.e. their mean) the random effects. This means that you can "forget" about them and use standard machinery to | Visualizing mixed model results
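The same link-scale construction is easy to reproduce elsewhere; here is a hedged Python sketch of the interval arithmetic (the linear-predictor value and standard error below are purely illustrative, not from the epilepsy data):

```python
from math import exp

def count_prediction_ci(xb, se_xb, z=1.959963984540054):
    """Point prediction and ~95% CI for a count outcome, built on the log
    (link) scale and then exponentiated, as in the Stata code above."""
    return exp(xb), exp(xb - z * se_xb), exp(xb + z * se_xb)

# illustrative values: linear predictor xb and its standard error
pred, lb, ub = count_prediction_ci(xb=1.2, se_xb=0.15)
assert lb < pred < ub   # exponentiation preserves the ordering
```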
17,873 | How to check which model is better in state space time series analysis?
To answer your first question: yes, all of that is possible; it is neither usual nor unusual. You should let the data tell you what the correct model is. Try augmenting the model further with seasonals, cycles, and explanatory regressors if possible.
You should not only compare the Akaike Information Criterion (AIC) across models, but also check that the residuals (the irregular term) are normal, homoskedastic, and independent (Ljung-Box test). If you can find a model that has all of these desirable properties, it should be your preferred model (a model with all these properties is also likely to have the best AIC).
Although the initial values will affect which maximum of the log-likelihood function is found, if your model is well specified the result shouldn't vary too much, and there should be an obvious candidate for the best model with the best initial values. I do a lot of this type of analysis in Matlab, and I have found that the best way to find initial values is just to play around for a bit. It can be tedious, but it works out well in the end.
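To make the AIC comparison step concrete, here is a small Python sketch with hypothetical log-likelihoods and parameter counts (invented numbers, not output from any fitted state space model):

```python
def aic(log_likelihood, n_params):
    """Akaike Information Criterion: 2k - 2*lnL; lower is better."""
    return 2 * n_params - 2 * log_likelihood

# Hypothetical candidates: (label, maximised log-likelihood, #parameters)
candidates = [
    ("local level", -512.3, 2),
    ("local linear trend", -498.7, 3),
    ("trend + seasonal", -490.1, 14),
]

ranked = sorted(candidates, key=lambda c: aic(c[1], c[2]))
best = ranked[0][0]
```

As noted above, the best-AIC model should still pass the residual diagnostics before being accepted.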
17,874 | Comparison of statistical tests exploring co-dependence of two binary variables
Tan et al., in Information Systems 29 (2004) 293-313, consider 21 different measures for association patterns between two binary variables. Each of these measures has its strengths and weaknesses. As the authors state in the abstract:
Objective measures such as support, confidence, interest factor, correlation, and entropy are often used to evaluate the interestingness of association patterns. However, in many situations, these measures may provide conflicting information about the interestingness of a pattern. ... In this paper, we describe several key properties one should examine in order to select the right measure for a given application. ... We show that depending on its properties, each measure is useful for some application, but not for others.
The major issues are in terms of the type of association in which one is interested and the properties of the measure that one wishes to maintain. For example, if P(X) is the probability of X = 1, P(Y) is the probability of Y = 1, and P(X,Y) is the probability that both are 1 in a 2 X 2 contingency table, which of the following matter to you in a measure of association:
Is the measure 0 when X and Y are statistically independent?
Does the measure increase with P(X,Y) as P(X) and P(Y) are constant?
Does the measure decrease monotonically in either P(X) or P(Y) as the other probabilities remain constant?
Is the measure symmetric under permutation of X and Y?
Is it invariant to row and column scaling?
Is it antisymmetric under row or column permutation?
Is it invariant when both rows and columns are swapped?
Is it invariant when extra cases in which both X and Y are 0 are added?
No measure has all of these properties. The issue of which type of measure makes the most sense in a particular application thus would seem to be more crucial than generic considerations of power; the "verdict" might well depend on the type of association of interest.
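As a toy illustration of checking such properties, here is a Python sketch for one common measure, the interest factor (lift). The counts are invented, and the definition is the usual textbook one, not quoted from Tan et al.:

```python
def lift(n11, n10, n01, n00):
    """Interest factor (lift): P(X=1, Y=1) / (P(X=1) * P(Y=1)),
    computed from the four cell counts of a 2 x 2 table."""
    n = n11 + n10 + n01 + n00
    pxy = n11 / n
    px = (n11 + n10) / n
    py = (n11 + n01) / n
    return pxy / (px * py)

# Check: equals 1 under exact independence (P(X,Y) = 0.2 = 0.4 * 0.5 here).
independent = lift(20, 20, 30, 30)

# Check: NOT invariant to adding extra (X=0, Y=0) cases - the value changes.
inflated = lift(20, 20, 30, 130)
```

This is exactly the kind of property-by-property audit the paper advocates: lift satisfies the independence property but fails the null-addition one.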
If your measure of codependence between X and Y is the type examined by a $\chi^2$ test of a 2 x 2 contingency table, then this answer also answers your question with respect to logistic regression and chi-square tests. Briefly:
Asymptotically, all the logistic regression tests are equivalent. The likelihood-ratio test is based on deviance so they're not 2 separate tests. The score test for a logistic regression (not mentioned in your question) is exactly equivalent to a $\chi^2$ test without continuity correction. With large numbers of cases, when the underlying assumptions of normality hold, they should all provide the same results.
For a logistic regression model, the likelihood-ratio test is generally preferred. The Wald test assumes a symmetric, normal distribution of the log-likelihood profile around the maximum-likelihood estimate, which might not hold in a small sample. The score test is in general less powerful in practice. (I haven't worked through the details specific to a 2-way contingency table, however.) Power functions would typically be calculated based on the assumptions underlying the tests (effectively normality assumptions, as also underlie the separate F-tests you note), which suggests to me that they would be the same under those assumptions. In practical applications involving small numbers of cases, such theoretical power functions might be misleading.
In answering this question, I assumed that the observations on variables X and Y are not paired. If they are paired, then see the answer from @Alexis.
17,875 | Comparison of statistical tests exploring co-dependence of two binary variables
The z test of proportions assumes your samples are independent (your single index $i$ for both $X$ and $Y$, together with your indication of a regression context, implies that these are paired, not independent, observations).
Therefore, McNemar's test (1947) is an appropriate test for association in paired binary data:
The positivist null hypothesis is that $X$ and $Y$ are not associated, with the positivist alternative being that $X$ and $Y$ are associated. One way of expressing this is:
$$H_{0}^{+}: P(Y=1|X=0) = P(Y=1|X=1)\text{, and}$$
$$H_{\text{A}}^{+}: P(Y=1|X=0) \ne P(Y=1|X=1)$$
Another way of expressing it is:
$$H_{0}^{+}: P(Y=1|X=0) = P(X=1|Y=0)\text{, and}$$
$$H_{\text{A}}^{+}: P(Y=1|X=0) \ne P(X=1|Y=0)$$
There are still other ways (e.g., using odds ratios, etc.)
McNemar's test uses counts of pairs:
$X=0, Y=0$ (concordant pair)
$X=1, Y=0$ (discordant pair, call this count $r$)
$X=0, Y=1$ (discordant pair, call this count $s$)
$X=1, Y=1$ (concordant pair)
And specifically, McNemar's test uses only the counts of discordant pairs (i.e. $r$ and $s$) to construct its test statistic. The formulation below includes a continuity correction, since counts are discrete by definition while the $\chi^{2}$ distribution is continuous:
$$\chi^{2} = \frac{\left(|r-s|-1\right)^{2}}{r+s}$$
This test statistic has a single degree of freedom, and $p = P(X^{2}_{\nu=1} > \chi^{2})$.
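A direct computation of this continuity-corrected statistic, with hypothetical discordant-pair counts (the 1-df chi-square tail probability uses the identity $P(\chi^2_1 > x) = \operatorname{erfc}(\sqrt{x/2})$):

```python
import math

def mcnemar(r, s):
    """Continuity-corrected McNemar statistic and its 1-df p-value,
    computed from the two discordant-pair counts r and s."""
    chi2 = (abs(r - s) - 1) ** 2 / (r + s)
    p = math.erfc(math.sqrt(chi2 / 2))  # P(chi-square with 1 df > chi2)
    return chi2, p

chi2, p = mcnemar(r=25, s=10)  # (|25 - 10| - 1)^2 / 35 = 196 / 35 = 5.6
```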
Per Bennett and Underwood (1970) "The McNemar test is in fact UMP for $p'=\frac{1}{2}$ against alternatives $p'\ne\frac{1}{2}$ by an argument similar to that of Lehmann ([1959], section 4.7)."
McNemar's test expresses the degree of association using an odds ratio $= \frac{r}{s}$, where $OR=1$ is consistent with no association, $OR>1$ are consistent with the odds of $Y=1$ being greater for $X=1$ relative to $X=0$, and vice versa. The confidence interval for this OR is $e^{\ln (\frac{r}{s})-z_{\alpha/2}\sqrt{(r+s)/rs}}$, $e^{\ln (\frac{r}{s})+z_{\alpha/2}\sqrt{(r+s)/rs}}$.
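The odds ratio and its interval can be computed directly with hypothetical discordant counts (a sketch; the standard error $\sqrt{(r+s)/rs}$ is the one quoted above):

```python
import math

def mcnemar_or_ci(r, s, z=1.959964):
    """Conditional odds ratio r/s with a Wald-type 95% CI on the log scale."""
    or_hat = r / s
    se = math.sqrt((r + s) / (r * s))
    lo = math.exp(math.log(or_hat) - z * se)
    hi = math.exp(math.log(or_hat) + z * se)
    return or_hat, lo, hi

or_hat, lo, hi = mcnemar_or_ci(r=25, s=10)  # odds ratio 2.5
```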
McNemar's test can also be framed as a test for equivalence with the negativist null hypothesis that $X$ and $Y$ are associated by at least $\Delta$ (your equivalence threshold, aka the relevant association you care about). $\Delta$ in this application takes values between 0 (zero association) and 1 (perfect association). The alternative hypothesis is that $X$ and $Y$ do not have an association as strong as $\Delta$ or stronger.
Interestingly, the test statistics for McNemar's equivalence test (in the TOST framework) are $z$ distributed, not $\chi^{2}$ distributed (Liu et al., 2002):
$z_{1} = \frac{n\Delta - [(r-s) - 1]}{\sqrt{(r+s)-n(\frac{r}{n}-\frac{s}{n})^{2}}}$, and
$z_{2} = \frac{[(r-s)+1]+n\Delta}{\sqrt{(r+s)-n(\frac{r}{n}-\frac{s}{n})^{2}}}$
These statistics have been constructed for upper tail rejection regions:
$p_{1} = P(Z > z_{1})$, and
$p_{2} = P(Z > z_{2})$
Only if both $p_{1} \le \alpha$ and $p_{2} \le \alpha$ can you reject the negativist null hypothesis, and conclude equivalence.
References
Bennett, B. M., & Underwood, R. E. (1970). 283. Note: On McNemar’s Test for the 2 $\times$ 2 Table and Its Power Function. Biometrics, 26(2), 339–343.
Liu, J., Hsueh, H., Hsieh, E., & Chen, J. J. (2002). Tests for equivalence or non-inferiority for paired binary data. Statistics In Medicine, 21, 231–245.
McNemar, Q. (1947). Note on the Sampling Error of the Difference Between Two Correlated Proportions or Percentages. Psychometrika, 12(2), 153–157.
17,876 | Comparison of statistical tests exploring co-dependence of two binary variables
To me, this seems like a case where you should use Fisher's exact test.
I.e. you would make a table of your outcomes,
╔═════╦═════════════╦═════════════╗
║ ║ X=0 ║ X=1 ║
╠═════╬═════════════╬═════════════╣
║ Y=0 ║ |[X=0^Y=0]| ║ |[X=1^Y=0]| ║
║ Y=1 ║ |[X=0^Y=1]| ║ |[X=1^Y=1]| ║
╚═════╩═════════════╩═════════════╝
where e.g. |[X=0^Y=0]| is the number of data points with $X_i=0$ and $Y_i=0$.
As explained in the Wikipedia entry, the values in the table will follow a hypergeometric distribution.
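The hypergeometric computation behind the one-sided version of the test can be sketched in a few lines of Python (the counts are invented for illustration):

```python
from math import comb

def hypergeom_pmf(a, row1, row2, col1):
    """P(top-left cell = a) for a 2 x 2 table with fixed margins:
    row totals row1, row2 and first-column total col1."""
    return comb(row1, a) * comb(row2, col1 - a) / comb(row1 + row2, col1)

# Hypothetical 2 x 2 table [[a, b], [c, d]]:
a, b, c, d = 8, 2, 1, 5
row1, row2, col1 = a + b, c + d, a + c

# One-sided p-value: probability of a table at least as extreme (a' >= a).
p_one_sided = sum(hypergeom_pmf(x, row1, row2, col1)
                  for x in range(a, min(row1, col1) + 1))
```

The usual two-sided Fisher p-value additionally sums the probabilities of tables on the other tail that are no more probable than the observed one.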
17,877 | Pooling calibration plots after multiple imputation
[...] if your n is 1,000 and you have 5 MI datasets, why not create a single calibration plot from the 5,000 and compare observed/expected in whatever desired fashion in those 5,000?
Regarding references:
No references; we published a paper recently where we stated without proof that we obtained inference for bootstrap standard errors and multiple imputation by pooling them together in this fashion. I think you can state that the purpose of the analysis is testing at the 0.05 level that the expectation/observation ratio or difference is within a normal distributional range, and that quantile estimates are invariant to the sample size, so testing based on the 95% CI is not affected by pooling.
17,878 | Expected numbers of distinct colors when drawing without replacement
Suppose you have $k$ colors, where $k \leq N$. Let $b_i$ denote the number of balls of color $i$, so $\sum b_i = N$. Let $B = \{b_1, \ldots, b_k\}$ and let $E_i(B)$ denote the set consisting of the $i$-element subsets of $B$. Let $Q_{n, c}$ denote the number of ways we can choose $n$ elements from the above set such that the number of different colors in the chosen set is $c$. For $c = 1$ the formula is simple:
$$
Q_{n, 1} = \sum_{E \in E_{1}(B)}\binom{\sum_{e \in E}e}{n}
$$
For $c = 2$ we can count the sets of balls of size $n$ that have at most 2 colors, and subtract the number of sets that have exactly $1$ color:
$$
Q_{n,2} = \sum_{E \in E_{2}(B)}\binom{\sum_{e \in E}e}{n} - \binom{k - 1}{1}Q_{n, 1}
$$
$\binom{k - 1}{1}$ is the number of ways you can add a color to a fixed color so that you end up with 2 colors when there are $k$ colors in total. More generally, if you have $c_1$ fixed colors and want to extend them to $c_2$ colors out of $k$ colors in total ($c_1 \leq c_2 \leq k$), the number of ways is $\binom{k - c_1}{c_2 - c_1}$. Now we have everything we need to derive the generic formula for $Q_{n, c}$:
$$
Q_{n, c} = \sum_{E \in E_{c}(B)}\binom{\sum_{e \in E}e}{n} - \sum_{i = 1}^{c - 1}\binom{k - i}{c - i}Q_{n, i}
$$
The probability that you will have exactly $c$ colors if you draw $n$ balls is:
$$
P_{n, c} = Q_{n, c} / \binom{N}{n}
$$
Also note that $\binom{x}{y} = 0$ if $y > x$.
Probably there are special cases where the formula can be simplified. I didn't bother to find those simplifications this time.
The expected value you're looking for, i.e. the expected number of colors as a function of $n$, is then:
$$
\gamma_{n} = \sum_{i = 1}^{k} P_{n, i} \cdot i
$$
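These formulas translate almost line by line into Python; here is a small sketch (the function and variable names are mine, not from the answer):

```python
from math import comb
from itertools import combinations

def expected_colors(balls, n):
    """gamma_n: expected number of distinct colors when drawing n balls
    without replacement, where balls[i] is the count b_i of color i.
    Implements Q_{n,c}, P_{n,c} and gamma_n as defined above."""
    k, N = len(balls), sum(balls)
    Q = [0] * (k + 1)  # Q[c] = Q_{n,c}
    for c in range(1, k + 1):
        total = sum(comb(sum(E), n) for E in combinations(balls, c))
        total -= sum(comb(k - i, c - i) * Q[i] for i in range(1, c))
        Q[c] = total
    denom = comb(N, n)  # note math.comb(x, y) == 0 when y > x, as required
    return sum(c * Q[c] for c in range(1, k + 1)) / denom

g = expected_colors([3, 2, 1], 2)  # 6 balls in 3 colors, draw 2
```

As a sanity check, the $Q_{n,c}$ sum to $\binom{N}{n}$, and the result agrees with the independent identity $\gamma_n = \sum_i \bigl(1 - \binom{N-b_i}{n}/\binom{N}{n}\bigr)$ obtained from linearity of expectation over per-color indicators.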
17,879 | Using the R forecast package with missing values and/or irregular time series
You should be very careful when you apply interpolation before further statistical treatment. The choice you make for your interpolation introduces a bias into your data. This is something you definitely want to avoid, as it could alter the quality of your predictions.
In my opinion, for missing values such as those you mentioned, which are regularly spaced in time and correspond to a stop in activity, it might be more correct to leave these days out of your model. In the little world of your call center (the model you are building of it), it might be better to consider that time simply stopped while it was closed, instead of inventing measurements of a non-existing activity.
On the other hand, the ARIMA model has been statistically built on the assumption that the data are equally spaced. As far as I know, there is no adaptation of ARIMA to your case. If you are just missing a few measurements on actual working days, you might be forced to use interpolation.
17,880 | Using the R forecast package with missing values and/or irregular time series
I am not an R expert, so maybe there is a simpler way, but I have come across this before. What I did was implement a function that measures the distance (in time units) between the actual dates and saves that in a new column of the existing time series. So we have something like:
index/date | value | distance
01.01.2011 | 15 | 1
02.01.2011 | 17 | 3
05.01.2011 | 22 | ..
This way, if your time series is not yet associated with an actual series of points in time (or wrong format or whatever), then you can still work with it.
Next, you write a function that creates a new time series for you, like so:
First, you calculate how many units of time the time series actually would have between the dates of your chosing and create that timeline in zoo or ts or whatever the choice is with empty values.
Second, you take your incomplete time series array and, using a loop, fill the values into the correct timeline, according to the limits of your choosing. When you come upon a row where the unit distance is not one (days (units) are missing), you fill in interpolated values.
Now, since this is your function, you can actually choose how to interpolate. For example, you might decide that if the distance is less than two units you use standard linear interpolation; if a week is missing, you do something else; and if a certain threshold of missing dates is reached, you give out a warning about the data. Really, whatever you want to imagine.
When the loop reaches the end date, you return your new ts.
The advantage of such a function is that you can use different interpolations or handling procedures depending on the length of the gap, and return a cleanly created series in the format of your choosing. Once written, it allows you to obtain a clean and nice ts out of any sort of tabular data. Hope this helps you somehow.
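A rough Python sketch of such a function (illustrative only: the answer describes doing this for R's ts/zoo objects, and the gap-handling rules here are invented examples). Short gaps get linear interpolation; longer gaps are flagged with None so they can be handled or warned about separately:

```python
from datetime import date, timedelta

def regularize(series, max_linear_gap=3):
    """series: list of (date, value) pairs, sorted, possibly with gaps.
    Returns a daily (date, value) list. Gaps of up to max_linear_gap days
    are filled by linear interpolation; longer gaps are filled with None."""
    out = []
    for (d0, v0), (d1, v1) in zip(series, series[1:]):
        out.append((d0, v0))
        gap = (d1 - d0).days  # unit distance between consecutive observations
        for k in range(1, gap):
            if gap <= max_linear_gap:
                value = v0 + (v1 - v0) * k / gap  # linear interpolation
            else:
                value = None                      # too long: flag instead
            out.append((d0 + timedelta(days=k), value))
    out.append(series[-1])
    return out

filled = regularize([(date(2011, 1, 1), 15.0),
                     (date(2011, 1, 2), 17.0),
                     (date(2011, 1, 5), 22.0)])
```

With the example table above, the 3-day distance between 02.01 and 05.01 is filled with two interpolated daily values, yielding a regular daily series.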
17,881 | Using the R forecast package with missing values and/or irregular time series | I would not interpolate the data before estimating the model on this data, as @Remi noted. It's a bad idea. An extreme example: imagine you have two data points, Jan 2013 and Jan 2014. Now interpolate 10 monthly points in between, Feb through Dec 2013, and run a regression on the monthly data. In reality it's not going to be this bad, but it's the same idea: you'll be inflating your statistics at best.
The way to go is to use time series methods which handle missing data. For instance, state space methods. Take a look at the astsa R package. It comes with an excellent book on time series analysis. This will handle missing data nicely. Matlab now has similar functionality in the ssm package. You have to learn to convert your models into state space form, but you have to learn this anyway if you want to step away from auto.arima "magic".
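To illustrate why state-space methods cope with gaps (a hand-rolled toy, not the astsa or ssm API): a local level Kalman filter simply skips the measurement update when an observation is missing, so no interpolation is ever needed. The noise variances `q` and `r` below are illustrative assumptions.

```python
def local_level_filter(ys, q=0.1, r=1.0):
    """Kalman filter for a local level model: state x_t = x_{t-1} + w_t,
    observation y_t = x_t + v_t.  Missing observations (None) get no
    measurement update -- the state is just propagated forward, which is
    how state-space methods digest gaps without interpolating anything.
    q: state noise variance, r: observation noise variance."""
    x, p = 0.0, 1e6           # diffuse initial state
    estimates = []
    for y in ys:
        p = p + q             # predict step
        if y is not None:     # update step, only when y is observed
            k = p / (p + r)   # Kalman gain
            x = x + k * (y - x)
            p = (1 - k) * p
        estimates.append(x)
    return estimates
```

Note how during a gap the estimate stays put while its variance `p` grows, so the next real observation is weighted accordingly.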
17,882 | Optional stopping rules not in textbooks | You can't have a stopping rule without some idea of your distribution and your effect size - which you don't know a priori.
Also yes, we need to focus on effect size - and it has never been regarded as correct to consider only p-values, and we should certainly not be showing tables or graphs that show p-values or F-values rather than effect size.
There are problems with traditional Statistical Hypothesis Inference Testing (which Cohen says is worthy of its acronym, and Fisher and Pearson would both turn over in their graves if they saw all that is being done in their violently opposed names today).
To determine N, you need to have already determined a target significance and power threshold, as well as making lots of assumptions about distribution, and in particular you also need to have determined the effect size that you want to establish. Indolering is exactly right that this should be the starting point - what minimum effect size would be cost effective!
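As a concrete illustration of how those prior choices pin down N (my sketch using the usual normal approximation for a two-sided two-sample comparison, not part of the original answer):

```python
import math
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.8):
    """Per-group sample size for a two-sided two-sample test of a
    standardized effect size d (Cohen's d), via the standard normal
    approximation: n = 2 * (z_{1-alpha/2} + z_{power})^2 / d^2.
    Every input here is exactly what the paragraph above says must be
    fixed in advance: alpha, power, and the minimum effect size."""
    z = NormalDist().inv_cdf
    return math.ceil(2 * (z(1 - alpha / 2) + z(power)) ** 2 / d ** 2)
```

For d = 0.5, alpha = 0.05 and 80% power this gives the familiar ~63 subjects per group; halving the target effect size roughly quadruples N.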
The "New Statistics" advocates showing the effect sizes (as paired differences where appropriate), along with the associated standard deviations or variances (because we need to understand the distribution), and the standard errors or confidence intervals (but the latter already locks in a p-value and a decision about whether you are predicting a direction or making an each-way bet). But setting a minimum effect of specified sign with a scientific prediction makes this clear - although the pre-scientific default is to do trial and error and just look for differences. But again, you have made assumptions about normality if you go this way.
Another approach is to use box-plots as a non-parametric approach, but the conventions about whiskers and outliers vary widely and even then themselves originate in distributional assumptions.
The stopping problem is indeed not a problem of an individual researcher setting or not setting N, but that we have a whole community of thousands of researchers, where 1000 is much more than 1/alpha for the traditional 0.05 level. The answer currently proposed is to provide the summary statistics (mean, stddev, stderr - or the corresponding "non-parametric" versions - median etc., as with boxplots) to facilitate meta-analysis, and to present combined results from all experiments whether or not they happen to have reached a particular alpha level.
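The community-level point can be made concrete with a one-liner (my illustration, not from the original answer): under the null, the chance that at least one of n independent studies crosses the alpha threshold is 1 - (1 - alpha)^n.

```python
def p_any_false_positive(n_studies, alpha=0.05):
    """Probability that at least one of n independent true-null
    studies crosses the alpha threshold by chance alone."""
    return 1 - (1 - alpha) ** n_studies
```

With n = 1000 and alpha = 0.05 this probability is essentially 1, and about n * alpha = 50 false positives are expected across the community - which is why pooling all results via meta-analysis beats filtering on per-study significance.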
Closely related is the multiple testing problem, which is just as fraught with difficulty, and where experiments are kept oversimplistic in the name of preserving power, whilst overcomplex methodologies are proposed to analyze the results.
I don't think there can be a text book chapter dealing with this definitively yet, as we still have little idea what we are doing...
For the moment, the best approach is probably to continue to use the traditional statistics most appropriate to the problem, combined with displaying the summary statistics - the effect and standard error and N being the most important. The use of confidence intervals is basically equivalent to the corresponding T-test, but allows comparing new results to published ones more meaningfully, as well as allowing an ethos encouraging reproducibility, and publication of reproduced experiments and meta-analyses.
In terms of Information Theoretic or Bayesian approaches, they use different tools and make different assumptions, but still don't have all the answers either, and in the end face the same problems, or worse ones, because Bayesian inference steps back from making a definitive answer and just adduces evidence relative to assumed or absent priors.
Machine Learning in the end also has results which it needs to consider for significance - often with CIs or T-tests, often with graphs, hopefully pairing rather than just comparing, and using appropriately compensated versions when the distributions don't match. It also has its controversies about bootstrapping and cross-validation, and bias and variance. Worst of all, it has the propensity to generate and test myriads of alternative models just by parameterizing thoroughly all the algorithms in one of the many toolboxes, applied to datasets thoughtfully archived to allow unbridled multiple testing. Worse still, it is still in the dark ages, using accuracy, or worse F-measure, for evaluation - rather than chance-corrected methods.
I have read dozens of papers on these issues, but have failed to find anything totally convincing - except the negative survey or meta-analysis papers that seem to indicate that most researchers don't handle and interpret the statistics properly with respect to any "standard", old or new. Power, multiple testing, sizing and early stopping, interpretation of standard errors and confidence intervals, ... these are just some of the issues.
Please shoot me down - I'd like to be proven wrong! In my view there's lots of bathwater, but we haven't found the baby yet! At this stage none of the extreme views or name-brand approaches looks promising as being the answer, and those that want to throw out everything else have probably lost the baby.
17,883 | Optional stopping rules not in textbooks | I do not believe that optional "stopping rules" is a technical term in regards to optimal stopping. However, I doubt that you will find much in-depth discussion on the topic in intro-level psychology statistics textbooks.
The cynical rationale for this is that all social-science students have weak math skills. The better answer, IMHO, is that simple t-Tests are not appropriate for most social science experiments. One has to look at the effect strength and figure out if that resolves the differences between groups. The former can indicate that the latter is possible but that's all it can do.
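For readers who want the effect-strength computation spelled out, here is a standard Cohen's d sketch (my addition, not part of the original answer):

```python
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Cohen's d: the standardized mean difference between two groups,
    scaled by the pooled sample standard deviation.  Unlike a bare
    p-value, it answers 'how big is the difference?', not merely
    'is there one?'."""
    na, nb = len(group_a), len(group_b)
    sa, sb = stdev(group_a), stdev(group_b)
    pooled = (((na - 1) * sa ** 2 + (nb - 1) * sb ** 2) / (na + nb - 2)) ** 0.5
    return (mean(group_a) - mean(group_b)) / pooled
```

By Cohen's own rough benchmarks, |d| near 0.2 is a small effect, 0.5 medium, 0.8 large - and a tiny d can still be "significant" with a big enough N.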
Measures of welfare spending, state regulation, and urbanization all have statistically significant relationships with measures of religious behavior. However, just stating the p-value is framing the test in an all-or-nothing causal relationship. See the following:
Results from both welfare spending and urbanization have statistically significant p-values, but welfare spending is much more strongly correlated. Welfare spending also shows a strong relationship to other measures of religiosity (the non-religious rate as well as comfort in religion) for which urbanization doesn't even attain a p-value of < .10, suggesting that urbanization doesn't impact general religious beliefs. Note, however, that even welfare spending doesn't explain Ireland or the Philippines, showing that some other effect(s) are comparatively stronger than that of welfare spending.
Relying on "stopping rules" can lead to false positives, especially with the small sample sizes of psychology. Psychology as a field is really being held back by these kinds of statistical shenanigans. However, placing all of our faith in an arbitrary p-value is pretty stupid as well. Even if we all sent our sample sizes and hypothesis statements to a journal before conducting the experiment, we would still run into false positives, as academia is collectively trolling for statistical significance.
The right thing to do isn't to stop data mining, the right thing to do is to describe the results in relation to their effect. Theories are judged not just by the accuracy of their predictions but also by the utility of those predictions. No matter how good the research methodology, a drug that provides a 1% improvement in cold symptoms isn't worth the cost to pack into a capsule.
Update: To be clear, I totally agree that social scientists should be held to a higher standard: we need to improve education, give social scientists better tools, and raise significance levels to 3-sigma. I'm trying to emphasize an under-represented point: the vast majority of psychology studies are worthless because the effect size is so small.
But with Amazon Turk, I can properly compensate for running 10 parallel studies and maintain a >3-sigma confidence level very cheaply. But if the effect strength is small, then there are significant threats to external validity. The effect of the manipulation might be due to a news story, or the ordering of the questions, or ....
I don't have time for an essay, but the quality issues within the social sciences go far beyond crappy statistical methods.
17,884 | Optional stopping rules not in textbooks | The article you cite makes no mention of stopping rules and has little bearing on the problem at hand. Its only, very slight relation is that of multiple testing, which is a statistical concept, not a scientific one.
In the literature of clinical trials, you will find that stopping rules are made rigorous with explicit information about the conditions in which a study will "look": based on calendar year or person-years of enrollment, the setting of an alpha level, and also bounds on effects for "effective" versus "harmful" treatments. Indeed, we should look to the rigorous conduct of such studies as an example of science done well. The FDA will even go so far as to say that, following a significant finding of efficacy other than that prespecified, a second trial must be conducted to validate these findings. This remains an issue, so much so that Thomas Flemming recommends that all clinical studies need to be validated with a completely independent second confirmatory trial, conducted by separate entities. So bad is the problem of false-positive errors when considering life and medical care.
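As an illustration of why interim "looks" must be pre-specified and bounded (a toy simulation with made-up parameters, not a clinical-trial design tool): peeking at a true-null experiment at several sample sizes and stopping at the first p < .05 inflates the false-positive rate well above the nominal 5%.

```python
import random
import statistics
from statistics import NormalDist

def peeking_error_rate(n_sims=2000, looks=(20, 40, 60, 80, 100),
                       alpha=0.05, seed=1):
    """Fraction of simulated true-null experiments declared
    'significant' at some interim look.  Each experiment draws N(0,1)
    data and applies a two-sided z-test (sigma known = 1) at every
    look, stopping at the first rejection -- the unethical version of
    a stopping rule, with no alpha-spending adjustment."""
    rng = random.Random(seed)
    crit = NormalDist().inv_cdf(1 - alpha / 2)
    hits = 0
    for _ in range(n_sims):
        data = [rng.gauss(0, 1) for _ in range(max(looks))]
        for n in looks:
            z = statistics.fmean(data[:n]) * n ** 0.5
            if abs(z) > crit:
                hits += 1
                break
    return hits / n_sims
```

With five unadjusted looks the familywise error is roughly 14%, nearly triple the nominal level - which is exactly what alpha-spending boundaries in group-sequential designs are built to prevent.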
With seemingly innocuous oversight, other fields of science have perpetuated bad ethics in research. Indeed, the social sciences don't affect the treatments people receive; they deal in abstracts and conceptual models which only enhance our understanding of the interplay of theory and observation. However, any consumer of social science, lay or scientific, is frequently presented with conflicting findings: chocolate is good for you, chocolate is bad for you (chocolate is good for you, by the way; the sugar and fat in chocolate are bad for you), sex is good for you, marriage makes you sad/marriage makes you happy. The field is rife with bad science. Even I am guilty of working on analyses where I was unhappy with the strongly causal language, which was then tied to strong recommendations about policy and federal support, totally unjustified, and yet it was published.
Simmons' article describes effectively how disclosure would assist in making explicit the kinds of "shortcuts" researchers take in social studies. Simmons gives in Table 1 an example of how data dredging dramatically increases false-positive error rates in a manner typical of unethical scientists "fishing for findings". The summary of findings in Table 2 describes the frequently omitted aspects of articles which would serve to greatly improve an understanding of how possibly more than one analysis was conducted.
To summarize, stopping rules are only appropriate with a pre-specified hypothesis: these are ethically sound and require statistical methods. Simmons' article admits that much of research does not even grant that; it is ethically unsound, but the statistical language is compelling for showing why exactly it is wrong.
17,885 | Making big, smart(er) bets | I think I came up with a workable brute-force solution; it goes like this:
1) calculate every possible combination of multiple bets I can make
For the example and amounts I provided in my question, this would be:
3 single, 0 double, 0 triple = equivalent to 1 single bet
2 single, 1 double, 0 triple = equivalent to 2 single bets
2 single, 0 double, 1 triple = equivalent to 3 single bets
1 single, 2 double, 0 triple = equivalent to 4 single bets
2) calculate the standard deviation of the symbol odds for every match
|          |  1  |  X  |  2  | stdev |
|----------|-----|-----|-----|-------|
| Match #1 | 0.3 | 0.4 | 0.3 | 0.047 |
| Match #2 | 0.1 | 0.0 | 0.9 | 0.402 |
| Match #3 | 0.0 | 0.0 | 1.0 | 0.471 |
3) for every multiple bet combination (step 1) compute a ranking using the formula:
ranking = (#n(x) [+ #n(y) [+ #n(z)]]) / stdev(#n)
Where #n is a specific match and #n(x|y|z) is the ordered odds of the symbols.
Matches are processed from low to high standard deviations.
Individual symbols in each match are processed from high to low odds.
Test for a 1 single, 2 double, 0 triple bet:
(#1(X) + #1(1)) / stdev(#1) = (0.4 + 0.3) / 0.047 = 14.89
(#2(2) + #2(1)) / stdev(#2) = (0.9 + 0.1) / 0.402 = 2.48
#3(2) / stdev(#3) = 1.0 / 0.471 = 2.12
This bet gives me global ranking of 14.89 + 2.48 + 2.12 = 19.49.
Test for a 2 single, 0 double, 1 triple bet:
(#1(X) + #1(1) + #1(2)) / stdev(#1) = (0.4 + 0.3 + 0.3) / 0.047 = 21.28
#2(2) / stdev(#2) = 0.9 / 0.402 = 2.24
#3(2) / stdev(#3) = 1.0 / 0.471 = 2.12
Which gives me a global ranking of 21.28 + 2.24 + 2.12 = 25.64. :-)
All the remaining bets will clearly be inferior so there is no point in testing them.
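Steps 1) through 3) can be sketched in a few lines of Python (my translation of the procedure described above; it uses the exact population standard deviations, so the totals differ slightly from the hand-rounded 19.49 and 25.64):

```python
from statistics import pstdev

# Odds for each match (outcomes 1, X, 2); pstdev reproduces the
# stdev column from step 2.
matches = [
    ("Match #1", [0.3, 0.4, 0.3]),
    ("Match #2", [0.1, 0.0, 0.9]),
    ("Match #3", [0.0, 0.0, 1.0]),
]

def ranking(widths):
    """Rank one bet combination.  widths[i] says how many symbols are
    covered on the i-th match after sorting matches by ascending
    stdev; within each match the highest-odds symbols are covered
    first, exactly as in the worked examples above."""
    by_spread = sorted(matches, key=lambda m: pstdev(m[1]))
    total = 0.0
    for (name, odds), w in zip(by_spread, widths):
        covered = sorted(odds, reverse=True)[:w]
        total += sum(covered) / pstdev(odds)
    return total
```

Here `ranking((3, 1, 1))` (a triple on Match #1 and singles elsewhere) beats `ranking((2, 2, 1))`, matching the conclusion of the worked examples.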
This method seems to work but I came up with it via trial and error and following my gut, I lack the mathematical understanding to judge whether it is correct or even if there is a better way...
Any pointers?
PS: Sorry for the bad formatting but the MD parser seems to be different from StackOverflow.
17,886 | Making big, smart(er) bets | How about making a solution based on the Simplex Method. Since the premise for using the Simplex method isn't fulfilled we need to modify the method slightly. I call the modified version "Walk the line".
Method:
You are able to measure the uncertainty of each match. Do it! Calculate the uncertainty of each match with a single or double bet (for a triple bet there is no uncertainty).
When adding a double or triple bet, always choose the one that reduces uncertainty the most.
Start at maximum number of triple bets. Calculate total uncertainty.
Remove one triple bet. Add one or two double bets, keeping under maximum cost. Calculate total uncertainty.
Repeat the previous step until you have the maximum number of double bets.
Pick the bet with the lowest total uncertainty.
17,887 | Making big, smart(er) bets | From observing these sports bets, I came to the following conclusions.
Expected value
Let's say that you have 3 bets with odds 1.29, 5.5 and 10.3 (the last bet in the table).
The EV for betting is
EV = 1/(1/1.29+1/5.5+1/10.3) - 1 = -0.05132282687714185
If it holds that the outcome probabilities are distributed as
1/1.29 : 1/5.5 : 1/10.3, then you are losing your money in the long run, since your EV is negative.
You can profit only if you can figure out the probabilities of each outcome and find irregularities.
Let's say that the true probabilities are
0.7 : 0.2 : 0.1
That means that the fair odds should be
1.43, 5.0 and 10.0.
You can see that in this case the best payoff is for betting on the draw, since it gives you
EV(0) = 5.5/5 - 1 = 0.1
while for betting on the loss it is
EV(2) = 10.3/10 - 1 = 0.03
and betting on the home win is even EV-negative:
EV(1) = 1.29/1.43 - 1 = -0.10
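These calculations can be checked with a few lines (the odds and the assumed true probabilities are the ones from the example; note that odds divided by fair odds, minus 1, is the same as odds times the true probability, minus 1):

```python
odds = [1.29, 5.5, 10.3]       # home / draw / away

# Bookmaker's implied probabilities and the EV under them (the overround).
implied = [1 / o for o in odds]
ev_uniform = 1 / sum(implied) - 1
print(round(ev_uniform, 4))    # negative: the book keeps a margin

# Assumed true probabilities from the example.
true_p = [0.7, 0.2, 0.1]
for o, p in zip(odds, true_p):
    # o * p - 1 equals o / (1/p) - 1, i.e. odds over fair odds minus one.
    print(round(o * p - 1, 2))
```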
17,888 | Lagrangian relaxation in the context of ridge regression | The correspondence can most easily be shown using the Envelope Theorem.
First, the standard Lagrangian will have an additional $\lambda \cdot t$ term. This will not affect the maximization problem if we are just treating $\lambda$ as given, so Hastie et al drop it.
Now, if you differentiate the full Lagrangian with respect to $t$, the Envelope Theorem says you can ignore the indirect effects of $t$ through $\beta$, because you're at a maximum. What you'll be left with is the Lagrange multiplier from $\lambda \cdot t$.
But what does this mean intuitively? Since the constraint binds at the maximum, the derivative of the Lagrangian, evaluated at the maximum, is the same as the derivative of the original objective. Therefore the Lagrange multiplier gives the shadow price -- the value in terms of the objective -- of relaxing the constraint by increasing $t$.
I assume this is the correspondence Hastie et al. are referring to.
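A quick numerical check of the envelope-theorem claim (a sketch with simulated data, not an example from Hastie et al.): solve the penalized ridge problem for a given $\lambda$, read off the implied constraint level $t = \|\beta\|^2$, and verify that the derivative of the constrained optimum with respect to $t$ is $-\lambda$:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=50)

def ridge(lam):
    """Penalized-form ridge: returns (t, RSS) at the solution beta(lambda)."""
    b = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)
    return float(b @ b), float(np.sum((y - X @ b) ** 2))

lam = 5.0
t1, v1 = ridge(lam)
t2, v2 = ridge(lam + 1e-4)      # slightly larger lambda -> slightly smaller t
shadow = (v2 - v1) / (t2 - t1)  # dV/dt along the ridge solution path
print(shadow, -lam)             # approximately equal: the multiplier is the shadow price
```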
17,889 | An intuitive explanation why the Benjamini-Hochberg FDR procedure works? | Here is some R-code to generate a picture. It will show 15 simulated p-values plotted against their order. So they form an ascending point pattern. The points below the red/purple lines represent significant tests at the 0.1 or 0.2 level. The FDR is the number of black points below the line divided by the total number of points below the line.
set.seed(1) # for reproducibility
x0 <- runif(10) # p-values of 10 true null hypotheses. They are Unif[0,1] distributed.
x1 <- rbeta(5,2,30) # 5 false hypotheses, rather small p-values
xx <- c(x1,x0)
plot(sort(xx))
a0 <- sort(xx)
for (i in 1:length(x0)){a0[a0==x0[i]] <- NA}
points(a0,col="red")
points(c(1,15), c(1/15 * 0.1 ,0.1), type="l", col="red")
points(c(1,15), c(1/15 * 0.2 ,0.2), type="l", col="purple")
I hope this might give some feeling for the shape of the distribution of the ordered p-values. That the lines are correct, and not e.g. some parabola-shaped curve, has to do with the shape of the order distributions. This has to be calculated explicitly. In fact, the line is just a conservative solution.
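The cutoff line in the plot is exactly the Benjamini-Hochberg step-up rule. Written out directly (a sketch; the p-values here are made up):

```python
def benjamini_hochberg(pvals, q=0.1):
    """Return the indices of the hypotheses rejected at FDR level q."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    # Find the largest rank k with p_(k) <= (k/m) * q, then reject the
    # hypotheses with the k smallest p-values.
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * q:
            k = rank
    return sorted(order[:k])

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205,
         0.212, 0.216, 0.222, 0.251, 0.269, 0.275, 0.34]
print(benjamini_hochberg(pvals, q=0.1))   # rejections at level 0.1
print(benjamini_hochberg(pvals, q=0.2))   # more rejections at the laxer level
```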
17,890 | What is the intuition behind a Long Short Term Memory (LSTM) recurrent neural network? | As I understand your questions, what you picture is basically concatenating the input, previous hidden state, and previous cell state, and passing them through one or several fully connected layers to compute the output hidden state and cell state, instead of independently computing "gated" updates that interact arithmetically with the cell state. This would basically create a regular RNN that only outputted part of the hidden state.
The main reason not to do this is that the structure of LSTM's cell state computations ensures constant flow of error through long sequences. If you used weights for computing the cell state directly, you'd need to backpropagate through them at each time step! Avoiding such operations largely solves vanishing/exploding gradients that otherwise plague RNNs.
Plus, the ability to retain information easily over longer time spans is a nice bonus. Intuitively, it would be much more difficult for the network to learn from scratch to preserve cell state over longer time spans.
It's worth noting that the most common alternative to LSTM, the GRU, similarly computes hidden state updates without learning weights that operate directly on the hidden state itself.
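For reference, the gated cell-state update being discussed looks like this (a minimal NumPy sketch of one LSTM step; the weight shapes and random initialization are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hid = 4, 3

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One weight matrix per gate, each acting on [x_t, h_{t-1}] (illustrative init).
W = {g: rng.normal(scale=0.1, size=(n_hid, n_in + n_hid)) for g in "fiog"}
b = {g: np.zeros(n_hid) for g in "fiog"}

def lstm_step(x, h_prev, c_prev):
    z = np.concatenate([x, h_prev])
    f = sigmoid(W["f"] @ z + b["f"])   # forget gate
    i = sigmoid(W["i"] @ z + b["i"])   # input gate
    o = sigmoid(W["o"] @ z + b["o"])   # output gate
    g = np.tanh(W["g"] @ z + b["g"])   # candidate update
    # The cell state is updated arithmetically -- no learned weights touch
    # c_prev itself, which is what keeps error flow through time well-behaved.
    c = f * c_prev + i * g
    h = o * np.tanh(c)
    return h, c

h, c = np.zeros(n_hid), np.zeros(n_hid)
for t in range(5):
    h, c = lstm_step(rng.normal(size=n_in), h, c)
print(h.shape, c.shape)
```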
17,891 | What is the intuition behind a Long Short Term Memory (LSTM) recurrent neural network? | If I have understood correctly, both of your questions boil down to this: in two places we use both tanh and sigmoid to process the information, and instead of that we could use one single neural network which takes in all the information.
I do not know the drawbacks of using one single neural network. In my opinion we can use a single neural network with sigmoid non-linearity which correctly learns the vector that will be used appropriately (added to the cell state in the first case, or passed on as the hidden state in the second case).
However, the way we are doing it now, we are breaking the task into two parts: one part uses the sigmoid non-linearity to learn the amount of data to be kept, while the other part, which uses tanh as the non-linearity, does the task of learning the information which is important.
In simple terms, sigmoid learns how much to save and tanh learns what to save, and breaking it into two parts makes the training easier.
17,892 | Using MLE vs. OLS | As explained here, OLS is just a particular instance of MLE. Here is a closely related question, with a derivation of OLS in terms of MLE.
The conditional distribution corresponds to your noise model (for OLS: Gaussian, and the same distribution for all inputs). There are other options (Student-t to deal with outliers, or allowing the noise distribution to depend on the input).
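A quick numerical illustration of the equivalence (simulated data; the noise scale is treated as known for simplicity): the Gaussian negative log-likelihood is a constant plus RSS / (2σ²), so the residual-sum-of-squares minimizer found by OLS is also the likelihood maximizer.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(100), rng.normal(size=100)])
beta_true = np.array([2.0, -1.0])
y = X @ beta_true + rng.normal(scale=0.5, size=100)

# OLS estimate via least squares.
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

def gaussian_nll(beta, sigma=0.5):
    """Negative log-likelihood of y given X under N(X beta, sigma^2) noise."""
    r = y - X @ beta
    n = len(y)
    return 0.5 * n * np.log(2 * np.pi * sigma**2) + 0.5 * r @ r / sigma**2

# Any perturbation of the OLS solution has a strictly larger NLL.
print(gaussian_nll(beta_ols) < gaussian_nll(beta_ols + np.array([0.1, 0.0])))
print(gaussian_nll(beta_ols) < gaussian_nll(beta_ols + np.array([0.0, -0.1])))
```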
17,893 | What is meant by the "level" of a time series? | This has to do with the order of integration. A stochastic process $X_t$ is said to be integrated of order $0$, equivalently $X_t$~$I(0)$ if it is stationary. If $X_t$~$I(d)$ with $d>0, d \in \mathbb{N}$, the process is said to be integrated of order $d$ and is then nonstationary. The above decomposition attempts to filter out the stationary components (as fluctuation component and innovations) and the nonstationary stochastic trend component. A stochastic trend is different from a deterministic trend, and the usage of the word trend in the passage is sloppy.
Now, this makes it all sound more complicated than it is. Let's consider an example. Take $\varepsilon_t$~$(0,\sigma^2)$ as a white noise process and let $\varepsilon_t$ be $iid$. Define the following lag polynomial
\begin{align}
C_1(L) &= 0.5L + 0.25L^2 -0.75L^3 -0.05 L^4 \\
\end{align}
The lag operator $L$ works on time-indexed random variables as $L^k\varepsilon:=\varepsilon_{t-k}$. Suppose now further that $X_t$ is generated as
\begin{align}
X_t = X_{t-1} + C_1(L)\varepsilon_t + \varepsilon_t
\end{align}
Then, using the terminology from your excerpt, the long term level would be defined by $X_{t-1}$, the seasonal/fluctuation component by $C_1(L)\varepsilon_t$ and the innovations by $\varepsilon_t$. As described in the excerpt, the fluctuation component and the innovations are stationary.
The reason why it is called that way is somewhat hard to see without making further remarks and relates back to the aforementioned order of integration. Usually, we don't encounter processes that are integrated of orders higher than $1$ or $2$, so let's consider the above example of integration order $1$.
First off, define $u_t := C_1(L)\varepsilon_t + \varepsilon_t$. $u_t$ is stationary, so $u_t$~$I(0)$.
Now we can write
\begin{align}
X_t &= X_{t-1} + u_t \Longleftrightarrow\\
X_t - X_{t-1} &= (1-L)X_t = \Delta X_t = u_t \\
\end{align}
This tells us that $X_t$~$I(1)$, because its first difference is integrated of order $0$. The meaning of this might be hard to grasp, until one realizes what $\Delta X_t = u_t$ actually means. It means that one can rewrite
\begin{align}
X_t &= \sum_{i=1}^{t} \Delta X_i = \sum_{i=1}^{t} u_i \qquad (\text{taking } X_0 = 0)\\
\end{align}
This might not look dramatic: $\mathbb{E}(X_t) = 0$, after all! However, the variance of this process is not finite and explodes to $\infty$. This is why we say the term defines a stochastic trend: while it is not deterministic (like for instance a linear trend), $X_t$ will only be stationary once we have filtered out the nonstationary component and subtracted it from $X_{t}$. (In this case, as observed previously, $\Delta X_t = X_t - X_{t-1}=C_1(L)\varepsilon_t + \varepsilon_t$ would have filtered out the nonstationary component and would be stationary.)
If you do not do this, your usual statistical inference procedures no longer work, since (suitably scaled) $X_{t}$ will converge to a Brownian motion by the invariance principle/Functional Central Limit Theorem. These results replace standard CLT results for autoregressions, cointegration problems, and so forth.
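A small simulation of the exploding-variance point (a sketch with iid innovations rather than the MA fluctuation component of the example; the qualitative behavior is the same):

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps = 2000, 400

# u_t: a stationary driving process (iid here, for simplicity);
# the level follows X_t = X_{t-1} + u_t.
u = rng.normal(size=(n_paths, n_steps))
X = np.cumsum(u, axis=1)

# The level X_t is I(1): its variance across paths grows linearly in t.
var_early, var_late = X[:, 49].var(), X[:, 399].var()
print(round(var_late / var_early, 1))    # close to 400/50 = 8

# The first difference is I(0): its variance does not grow.
dX = np.diff(X, axis=1)
print(round(dX[:, 50].var(), 2), round(dX[:, 350].var(), 2))   # both close to 1
```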
17,894 | What is the "direct likelihood" point of view in statistics? | Hugo, I have seen the term "Direct-Likelihood" used as a method with respect to handling missing data (aka missingness, e.g. in clinical trials), using likelihood-based mixed-effects models and modeling the missing data as random.
There is a very good Tutorial paper on this:
Direct likelihood analysis versus simple forms of imputation for missing data in randomized clinical trials
Clinical Trials 2005 2: 379
Caroline Beunckens, Geert Molenberghs and Michael G Kenward
https://www.researchgate.net/publication/7452657_Direct_likelihood_analysis_versus_simple_forms_of_imputation_for_missing_data_in_randomized_clinical_trials
They also mention that the direct-likelihood method is also termed: likelihood-based MAR analysis, likelihood-based ignorable analysis, random-effects models, random-coefficient models.
17,895 | How to compare two Spearman correlation matrices? | Since we are working with Spearman correlation matrices constructed from the same set of ranks, the simple method presented in this 2012 work, A simple procedure for the comparison of covariance matrices, may be of value.
In particular to quote:
Here I propose a new, simple method to make this comparison in two population samples that is based on comparing the variance explained in each sample by the eigenvectors of its own covariance matrix with that explained by the covariance matrix eigenvectors of the other sample. The rationale of this procedure is that the matrix eigenvectors of two similar samples would explain similar amounts of variance in the two samples. I use computer simulation and morphological covariance matrices from the two morphs in a marine snail hybrid zone to show how the proposed procedure can be used to measure the contribution of the matrices orientation and shape to the overall differentiation.
Of particular import is the claimed results and conclusions:
Results
I show how this procedure can detect even modest differences between matrices calculated with moderately sized samples, and how it can be used as the basis for more detailed analyses of the nature of these differences.
Conclusions
The new procedure constitutes a useful resource for the comparison of covariance matrices. It could fill the gap between procedures resulting in a single, overall measure of differentiation, and analytical methods based on multiple model comparison not providing such a measure.
And further comments from the available full text:
In the present work I propose a new, simple and distribution-free procedure for the exploration of differences between covariance matrices that, in addition to providing a single and continuously varying measure of matrix differentiation, makes it possible to analyse this measure in terms of the contributions of differences in matrix orientation and shape. I use both computer simulation and P matrices corresponding to snail morphological measures to compare this procedure with some widely used alternatives. I show that the new procedure has power similar or better than that of the simpler methods, and how it can be used as the basis for more detailed analyses of the nature of the found differences.
If other methods prove less impressive, you may wish to further investigate the above for the comparison of rank correlation matrices by performing your own simulation testing.
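A sketch of the core idea (not the paper's full procedure): compute each sample's Spearman matrix, then compare the variance explained by a sample's own eigenvectors with that explained by the other sample's eigenvectors. The sample sizes, dimension, and equicorrelation structure below are illustrative:

```python
import numpy as np

def spearman_matrix(X):
    """Spearman correlation matrix: Pearson correlation of the column ranks."""
    ranks = np.argsort(np.argsort(X, axis=0), axis=0).astype(float)
    return np.corrcoef(ranks, rowvar=False)

def explained_by(S_own, S_other):
    """Variance explained in one sample by the other sample's eigenvectors."""
    _, V = np.linalg.eigh(S_other)
    return np.sort(np.diag(V.T @ S_own @ V))[::-1]

rng = np.random.default_rng(0)
cov = np.eye(4) * 0.3 + 0.7            # equicorrelation structure, rho = 0.7
A = rng.multivariate_normal(np.zeros(4), cov, size=300)
B = rng.multivariate_normal(np.zeros(4), cov, size=300)

S_a, S_b = spearman_matrix(A), spearman_matrix(B)
own = np.sort(np.linalg.eigvalsh(S_a))[::-1]   # explained by S_a's own eigenvectors
cross = explained_by(S_a, S_b)                 # explained by S_b's eigenvectors
print(np.round(own, 2))
print(np.round(cross, 2))                      # similar values -> similar matrices
```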
17,896 | An unbiased estimator of the ratio of two regression coefficients? | I would suggest doing error propagation on the derived variable and minimizing either the error or the relative error of $\frac{a_1}{a_2}$. For example, from Strategies for Variance Estimation or Wikipedia:
$f = \frac{A}{B}\,$
$\sigma_f^2 \approx f^2 \left[\left(\frac{\sigma_A}{A}\right)^2 + \left(\frac{\sigma_B}{B}\right)^2 - 2\frac{\sigma_{AB}}{AB} \right]$
$\sigma_f \approx \left| f \right| \sqrt{ \left(\frac{\sigma_A}{A}\right)^2 + \left(\frac{\sigma_B}{B}\right)^2 - 2\frac{\sigma_{AB}}{AB} }$
As a guess, you probably want to minimize $(\frac{\sigma_f}{f})^2$. It is important to understand that when one does regression to find a best parameter target, one has forsaken goodness of fit. The fit process will find a best $\frac{A}{B}$, and this is definitively not related to minimizing residuals. This has been done before by taking logarithms of a non-linear fit equation, to which multiple linear regression was applied with a different parameter target and Tikhonov regularization.
The moral of this story is that unless one asks the data to yield the answer that one desires, one will not obtain that answer. And, regression that does not specify the desired answer as a minimization target will not answer the question.
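For concreteness, the propagated variance can be computed from a fitted regression's coefficient covariance matrix (a sketch with simulated data, where $a_1/a_2 = 2.5$ by construction):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 2.0 * x1 + 0.8 * x2 + rng.normal(scale=0.5, size=n)

X = np.column_stack([x1, x2])
beta = np.linalg.solve(X.T @ X, X.T @ y)
resid = y - X @ beta
sigma2 = resid @ resid / (n - 2)
cov = sigma2 * np.linalg.inv(X.T @ X)   # covariance of the coefficient estimates

A, B = beta
f = A / B
# The propagated variance formula from above, with sigma_AB = cov[0, 1].
var_f = f**2 * (cov[0, 0] / A**2 + cov[1, 1] / B**2 - 2 * cov[0, 1] / (A * B))
print(round(f, 2), round(np.sqrt(var_f), 3))
```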
$f | An unbiased estimator of the ratio of two regression coefficients?
I would suggest doing error propagation on the variable type and minimize either the error or relative error of $\frac{a_1}{a_2}$. For example, from Strategies for Variance Estimation or Wikipedia
$f = \frac{A}{B}\,$
$\sigma_f^2 \approx f^2 \left[\left(\frac{\sigma_A}{A}\right)^2 + \left(\frac{\sigma_B}{B}\right)^2 - 2\frac{\sigma_{AB}}{AB} \right]$
$\sigma_f \approx \left| f \right| \sqrt{ \left(\frac{\sigma_A}{A}\right)^2 + \left(\frac{\sigma_B}{B}\right)^2 - 2\frac{\sigma_{AB}}{AB} }$
As a guess, you probably want to minimize $(\frac{\sigma_f}{f})^2$. It is important to understand that when one does regression to find a best parameter target, one has forsaken goodness of fit. The fit process will find a best $\frac{A}{B}$, and this is definitively not related to minimizing residuals. This has been done before by taking logarithms of a non-linear fit equation, for which multiple linear applied with a different parameter target and Tikhonov regularization.
The moral of this story is that unless one asks the data to yield the answer that one desires, one will not obtain that answer. And, regression that does not specify the desired answer as a minimization target will not answer the question. | An unbiased estimator of the ratio of two regression coefficients?
I would suggest doing error propagation on the variable type and minimize either the error or relative error of $\frac{a_1}{a_2}$. For example, from Strategies for Variance Estimation or Wikipedia
$f |
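As an illustration of the propagation formula above, here is a small sketch of my own (not part of the original answer; the function name and interface are hypothetical):

```python
from math import sqrt

def ratio_uncertainty(A, B, sigma_A, sigma_B, cov_AB=0.0):
    """First-order (delta-method) standard error of f = A / B:
    sigma_f ~= |f| * sqrt((sigma_A/A)^2 + (sigma_B/B)^2 - 2*cov_AB/(A*B))."""
    f = A / B
    rel_var = (sigma_A / A) ** 2 + (sigma_B / B) ** 2 - 2.0 * cov_AB / (A * B)
    return abs(f) * sqrt(rel_var)
```

With uncorrelated estimates ($\sigma_{AB}=0$) the relative variances simply add; a positive covariance between $A$ and $B$ shrinks the uncertainty of the ratio.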
17,897 | Multivariate biological time series : VAR and seasonality | I know this question is pretty much old but it remained unanswered. Perhaps removing the seasonal cycle in the data is not the main question, but it is part of it, so I'll give it a try: To remove seasonality from a data set there are several methods, from simple monthly-aggregated averages to fitting a sinusoidal (or another appropriate harmonic) function with non-linear fitting methods like Nelder-Mead.
The easiest way is to average data belonging to all Januaries, to all Februaries, and so on, i.e., you create a composited annual cycle, which you can then subtract from your data.
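The composited-annual-cycle idea can be sketched in a few lines (my own illustration, not from the original answer; the function name and interface are made up):

```python
from collections import defaultdict

def remove_seasonal_cycle(values, months):
    """Average all Januaries, all Februaries, and so on, then subtract
    each month's composite mean from the observations of that month."""
    sums, counts = defaultdict(float), defaultdict(int)
    for v, m in zip(values, months):
        sums[m] += v
        counts[m] += 1
    monthly_mean = {m: sums[m] / counts[m] for m in sums}
    return [v - monthly_mean[m] for v, m in zip(values, months)]
```

Applied to each series of a multivariate set before fitting a VAR, this removes the deterministic seasonal mean; any remaining seasonal dynamics would still need seasonal lags or dummies.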
17,898 | Conditional expectation subscript notation | This is right and what you have in the end can be simplified, e.g.:
$$
E_{X}\big[E_{Y|X}[(Y-f(X))^2|X]\big]
\quad=\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} [y-f(x)]^2 p_{X,Y}(x,y) \ dx\,dy
\quad =E_{X,Y}[(Y-f(X))^2],
$$
and manipulated:
$$
=E_{Y}[Y^2] - 2\, E_{X,Y}[Y f(X)] + E_{X}[f(X)^2]
.
$$
The expectation notation is just ... notation, whereas the mathematical notation is more explicit/universal and the "safest" way to consider things.
I don't believe the condition inside the expectation square brackets $E_{Y|X}[\cdot|X]$ is necessary or adds anything when the distribution is explicit in the subscript, i.e. $E_{Y|X}[g(X,Y)|X]=E_{Y|X}[g(X,Y)]$, whereas it would be necessary if the subscript were omitted (as is often the case): $E[g(X,Y)|X]\neq E[g(X,Y)]$, since the latter would typically be an expectation over $p(X,Y)$ and the former over $p(Y|X)$.
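A quick Monte Carlo sanity check of the expanded form $E_{Y}[Y^2] - 2\,E_{X,Y}[Y f(X)] + E_{X}[f(X)^2]$ (my own illustration; the joint distribution and the choice of $f$ are arbitrary):

```python
import random

random.seed(0)
n = 100_000
xs = [random.gauss(0.0, 1.0) for _ in range(n)]
ys = [0.5 * x + random.gauss(0.0, 1.0) for x in xs]  # dependent pair (X, Y)
f = lambda x: 2.0 * x                                # any fixed function of X

# E_{X,Y}[(Y - f(X))^2] and its term-by-term expansion agree:
lhs = sum((y - f(x)) ** 2 for x, y in zip(xs, ys)) / n
rhs = (sum(y * y for y in ys) / n
       - 2.0 * sum(y * f(x) for x, y in zip(xs, ys)) / n
       + sum(f(x) ** 2 for x in xs) / n)
```

The equality holds sample-by-sample, so the two empirical averages match to floating-point precision, not merely asymptotically.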
17,899 | Upper bounds for the copula density? | Generally speaking, no, there isn't. For example, in the bivariate Gaussian copula case, the quantity in the exponent has a saddle point at (0,0), and therefore explodes to infinity in two directions.
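To see the unboundedness concretely, here is a small stdlib-only sketch of my own (not from the original answer) of the bivariate Gaussian copula density; for $\rho > 0$ it grows without bound along the diagonal toward $(1,1)$:

```python
from math import exp, sqrt
from statistics import NormalDist

def gaussian_copula_density(u, v, rho):
    """c(u,v) = phi2(x, y; rho) / (phi(x) phi(y)), x = Phi^{-1}(u), y = Phi^{-1}(v)."""
    nd = NormalDist()
    x, y = nd.inv_cdf(u), nd.inv_cdf(v)
    return exp(-(rho * rho * (x * x + y * y) - 2.0 * rho * x * y)
               / (2.0 * (1.0 - rho * rho))) / sqrt(1.0 - rho * rho)

# Density along the diagonal u = v for rho = 0.5: it keeps growing toward (1,1).
for u in (0.9, 0.99, 0.999, 0.9999):
    print(u, gaussian_copula_density(u, u, 0.5))
```

On the diagonal the exponent reduces to $\rho x^2/(1+\rho) > 0$ for $\rho \in (0,1)$, which is why the density diverges as $u = v \to 1$.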
If you come across a class of copula densities that are in fact bounded, please let me know!
17,900 | How to estimate variance components with lmer for models with random effects and compare them with lme results | One common way to determine the relative contribution of each factor to a model is to remove the factor and compare the relative likelihood with something like a chi-squared test:
pchisq(2 * as.numeric(logLik(model1) - logLik(model2)), df = 1, lower.tail = FALSE)
As the way that likelihoods are calculated may differ slightly between functions, I typically only compare models fitted with the same method.
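For reference, the p-value that chi-squared comparison produces can also be computed directly: with one degree of freedom the $\chi^2_1$ survival function reduces to a complementary error function. A stdlib sketch of my own (not lme4/nlme code; the function name is made up):

```python
from math import erfc, sqrt

def lrt_pvalue_df1(loglik_full, loglik_reduced):
    """Likelihood-ratio test with 1 df: statistic s = 2*(llf - llr);
    P(chi2_1 > s) = P(|Z| > sqrt(s)) = erfc(sqrt(s / 2))."""
    stat = 2.0 * (loglik_full - loglik_reduced)
    return erfc(sqrt(stat / 2.0))
```

Note that when the dropped term is a variance component (so the null value sits on the boundary of the parameter space), the plain $\chi^2_1$ reference distribution is conservative.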