35,901 | Why does Off-Policy Monte Carlo Control only learn from the "Tails of Episodes"?
Actually, the answer to your first question (why does this method learn only from the tails of episodes?) is due to the second-to-last line.
The if statement basically says that the method will learn from a state-action pair in an episode only if the action is near-greedy; otherwise $A_t$ likely would not equal $\pi(S_t)$.
35,902 | Can Bernoulli random variables be used to approximate more than just the normal distribution?
Bernoulli random variables can approximate (almost) any distribution to arbitrary accuracy: a sequence of Bernoulli values gives us a binary sequence, which can be interpreted as the binary representation of a real number. (This is not surprising, given that a real number is essentially just an infinite sequence of discrete digits.) Hence, by an appropriate mapping, we can transform a Bernoulli sequence into a continuous uniform random variable. Once we have done this, we can use the standard technique of inverse transform sampling to get a random variable from an arbitrary distribution.
Now, there is a limitation here: in practice we never have an infinite sequence of Bernoulli values, only a large finite one. This allows us to approximate a uniform random variable arbitrarily well, and hence to approximate any distribution that can itself be approximated arbitrarily well by mapping a random variable that is arbitrarily close to uniform.
Mathematical details for generation with an infinite sequence: Suppose you want to generate a scalar random variable with distribution function $F$. To do this, we consider a binary sequence $X_1, X_2, X_3, \ldots \sim \text{IID Bern}(\tfrac{1}{2})$ and define the corresponding random variables:
$$A = A(\boldsymbol{X}) = \inf \Big\{ r \in \mathbb{R} \Big| F(r) \geqslant U \Big\} \quad \quad \quad U = U(\boldsymbol{X}) = \sum_{i=1}^\infty \frac{X_i}{2^i} \sim \text{U}(0,1).$$
(This function is well-defined, by the completeness of the real numbers.) Now, since $F$ is a non-decreasing function, we have:
$$\mathbb{P}(A \leqslant a) = \mathbb{P} \Big( \inf \Big\{ r \in \mathbb{R} \Big| F(r) \geqslant U \Big\} \leqslant a \Big) = \mathbb{P} \Big( U \leqslant F(a) \Big) = F(a).$$
(Note that this result does not require continuity of $F$, so it works for general distributions, not just continuous distributions.)
Mathematical details for generation with a finite sequence: The above case is an idealised case where we can generate an infinite sequence of Bernoulli random variables. We now consider the more realistic case where we can generate some arbitrarily large finite sequence with $k \in \mathbb{N}$ terms. Hence, we now have the finite sequence $X_1, X_2, ..., X_k \sim \text{IID Bern}(\tfrac{1}{2})$ and we define the corresponding random variables:
$$A_k = \inf \Big\{ r \in \mathbb{R} \Big| F(r) \geqslant U_k \Big\} \quad \quad \quad U_k = \sum_{i=1}^k \frac{X_i}{2^i} + \frac{1}{2^{k+1}}.$$
(We have included an additional "continuity correction" term in $U_k$ so that its distribution is still symmetric around the value $\mathbb{E}(U_k) = \tfrac{1}{2}$.) We now have:
$$\mathbb{P}(A_k \leqslant a) = \mathbb{P} \Big( \inf \Big\{ r \in \mathbb{R} \Big| F(r) \geqslant U_k \Big\} \leqslant a \Big) = \mathbb{P} \Big( U_k \leqslant F(a) \Big).$$
For large $k$, we then have:
$$\mathbb{P}(A_k \leqslant a) = \mathbb{P} \Big( U_k \leqslant F(a) \Big) \approx \mathbb{P} \Big( U \leqslant F(a) \Big) = F(a).$$
As you can see, this approximation relies on our ability to approximate the event $U \leqslant F(a)$ by the event $U_k \leqslant F(a)$, for large $k$. For all but pathological distributions this approximation can be made arbitrarily close by taking $k$ to be sufficiently large. There are some pathological distributions where this is not the case (e.g., any distribution with non-zero probability on the irrational numbers, or more broadly, on real values that cannot be represented as a finite binary number), but this is quite a narrow class of distributions. Hence, this technique will approximate a random variable with (almost) any distribution to an arbitrary degree of accuracy.
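To make the finite-sequence construction concrete, here is a small illustrative sketch (not part of the original answer) that builds $U_k$ from $k$ unbiased bits, including the continuity-correction term, and applies the known inverse CDF of the exponential distribution, $F^{-1}(u) = -\ln(1-u)/\lambda$. The function names are our own.

```python
import numpy as np

def bernoulli_bits_to_uniform(bits):
    """Map a finite Bernoulli(1/2) sequence to U_k, including the
    "continuity correction" term 1/2^(k+1) from the text."""
    k = len(bits)
    u = sum(b / 2 ** (i + 1) for i, b in enumerate(bits))  # sum X_i / 2^i
    return u + 1 / 2 ** (k + 1)

def sample_exponential(rate, k, rng):
    """Approximate an Exponential(rate) draw via inverse transform
    sampling applied to U_k built from k unbiased bits."""
    bits = rng.integers(0, 2, size=k)          # X_1, ..., X_k ~ Bern(1/2)
    u = bernoulli_bits_to_uniform(bits)        # U_k, strictly inside (0, 1)
    return -np.log(1.0 - u) / rate             # F^{-1}(u) for the exponential

rng = np.random.default_rng(0)
draws = [sample_exponential(rate=2.0, k=32, rng=rng) for _ in range(20000)]
print(np.mean(draws))   # should be close to 1/rate = 0.5
```

With $k = 32$ bits, $U_k$ already matches a uniform variate to better than $2^{-32}$, so the distributional error is negligible for most practical purposes.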
35,903 | Can Bernoulli random variables be used to approximate more than just the normal distribution?
Yes, depending on circumstances. Binomial and negative binomial (including geometric) distributions are defined in terms of Bernoulli random variables, so the relationship of Bernoulli random variables to those distributions is not an approximation. Especially for large $n$ and small $p$,
$\mathsf{Binom}(n, p)$ is approximately $\mathsf{Pois}(\lambda = np).$
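This approximation is easy to check numerically. A small sketch using only the standard library (the helper names are ours, not from the answer):

```python
import math

def binom_pmf(k, n, p):
    # P(X = k) for X ~ Binomial(n, p)
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

def pois_pmf(k, lam):
    # P(X = k) for X ~ Poisson(lam)
    return math.exp(-lam) * lam ** k / math.factorial(k)

n, p = 1000, 0.002           # large n, small p
lam = n * p                  # lambda = np = 2.0

max_diff = max(abs(binom_pmf(k, n, p) - pois_pmf(k, lam)) for k in range(15))
print(max_diff)   # small: the two pmfs nearly coincide in this regime
```

By Le Cam's theorem, the total variation distance here is at most $np^2 = 0.004$, which the printed maximum pointwise difference respects.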
Your purpose is not clear, but once you have approximated normal distributions in terms of Bernoulli random variables, you could exploit the relationship of normal distributions to chi-squared, t (extended to Cauchy), and F distributions to approximate those by simulation as well.
In the same spirit, you might exploit the relationship of Poisson to exponential random variables, and go further to approximate Laplace, gamma (at least with integer shape parameters), and some Weibull distributions. [This is a partial list.]
Simulations generally begin from pseudorandom realizations of $\mathsf{Unif}(0,1),$ and other distributions can then be simulated in terms of uniform random variables. Simulations in terms of Bernoulli random variables might be somewhat more
awkward and limited than simulation in terms of uniform random variables, but there are, nevertheless, many possibilities. M/M queueing processes (defined in continuous time) can be simulated approximately by 'discretizing time' and using Bernoulli instead of exponential random variables in the simulation.
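As a sketch of the discretized-time idea in the last paragraph, one can replace an Exponential(rate) waiting time by the time of the first success in a sequence of Bernoulli(rate*dt) trials, one trial per step of width dt. The function name and parameter values are our own illustration, not from the answer:

```python
import random

def discretized_exponential(rate, dt, rng):
    """Waiting time until the first success of Bernoulli(rate*dt) trials,
    one per time step of width dt; as dt -> 0 this geometric waiting time
    approaches an Exponential(rate) waiting time."""
    p = rate * dt             # per-step event probability (assumes p < 1)
    t = 0.0
    while True:
        t += dt
        if rng.random() < p:  # Bernoulli trial for this time step
            return t

rng = random.Random(0)
rate, dt = 1.5, 0.01
samples = [discretized_exponential(rate, dt, rng) for _ in range(20000)]
print(sum(samples) / len(samples))   # close to the exponential mean 1/rate
```

Note that the mean is exact (dt / p = 1/rate) for any dt; the discretization error shows up in the shape of the distribution, which is geometric on the grid rather than continuous.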
35,904 | Can Bernoulli random variables be used to approximate more than just the normal distribution?
This will work in most cases, at least if the Bernoulli variables involved are independent and identically distributed, or at least exchangeable and identically distributed (Peres 1992), and an unlimited sequence of them is available. This answer discusses algorithms for approximating a distribution using Bernoulli random variables.
The rest of this answer assumes the Bernoulli random variables have a mean (bias) of 1/2 (they represent unbiased random bits). If the random variables (bits) have an unknown bias, a randomness extraction method such as the von Neumann or Peres extractor can be used to generate unbiased random bits from them (see my note on randomness extraction).
There are many ways to approximate a distribution to an arbitrary accuracy, and they are discussed in Devroye and Gravel 2015/2020:
For discrete distributions, there are many possibilities. One is to take the binary expansions of the distribution's probabilities and perform a random walk on them, driven by unbiased random bits. This is the essence of the DDG tree algorithm by Knuth and Yao. In fact any practical algorithm for discrete distributions can be recast to a DDG tree algorithm. See also Gryszka 2020.
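For intuition, here is a crude interval-refinement sampler in the spirit of (but much simpler than) the Knuth and Yao DDG-tree idea: consume unbiased bits to narrow the dyadic interval containing an implied uniform variate until it fits inside a single cell of the CDF. This is our own illustrative sketch, not code from the references:

```python
import itertools
import random

def sample_discrete(probs, bit_source):
    """Consume unbiased bits one at a time, refining the dyadic interval
    [low, low + width) that the implied uniform variate lies in; stop as
    soon as the interval fits inside a single CDF cell. Terminates with
    probability 1 (immediately for dyadic cumulative probabilities)."""
    cdf = list(itertools.accumulate(probs))     # cumulative probabilities
    low, width = 0.0, 1.0
    while True:
        width /= 2.0
        if next(bit_source):                    # bit 1: take upper half
            low += width
        # CDF cells containing the interval's two endpoints.
        lo_cell = next(i for i, c in enumerate(cdf) if low < c)
        hi_cell = next(i for i, c in enumerate(cdf) if low + width <= c)
        if lo_cell == hi_cell:
            return lo_cell

rng = random.Random(42)
bits = iter(lambda: rng.getrandbits(1), None)   # endless unbiased bit stream
counts = [0, 0, 0]
for _ in range(8000):
    counts[sample_discrete([0.5, 0.25, 0.25], bits)] += 1
print(counts)   # roughly proportional to [0.5, 0.25, 0.25]
```

For the probabilities [0.5, 0.25, 0.25] the sampler is bit-optimal: one bit decides outcome 0, and two bits decide outcomes 1 and 2, matching the distribution's entropy of 1.5 bits.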
The binary entropy of a discrete distribution is a lower bound on the average number of random bits needed to produce a variate of that distribution. Unfortunately, some discrete distributions have unbounded entropy; Devroye and Gravel give as an example the zeta Dirichlet distribution (in certain parameterizations).
For continuous distributions, Devroye and Gravel discuss inversion, discretization, convolution, and rejection sampling to produce a variate from a given distribution using random bits, and give lower bounds on the number of random bits needed to generate a variate with the desired accuracy. Different approaches work well for different distributions. One notable example of generating a continuous variate using unbiased bits is the exponential distribution, such as C. F. F. Karney's improvement (in "Sampling exactly from a normal distribution") of von Neumann's time-honored exponential generator. Also, I specify algorithms to do arithmetic on random variates represented by sequences of bits or digits. On the inversion side, if a distribution's inverse CDF is known, enough bits of a uniform(0,1) random variate can be generated so that the inverse CDF can be calculated from that variate with low enough variance.
REFERENCES:
Devroye, L., Gravel, C., "Random variate generation using only finitely many unbiased, independently and identically distributed random bits", arXiv:1502.02539v6 [cs.IT], 2020.
Peres, Y., "Iterating von Neumann's procedure for extracting random bits", Annals of Statistics 1992,20,1, p. 590-597.
Knuth, Donald E. and Andrew Chi-Chih Yao, "The complexity of nonuniform random number generation", in Algorithms and Complexity: New Directions and Recent Results, 1976.
Gryszka, K., "From biased coin to any discrete distribution", Period Math Hung (2020). https://doi.org/10.1007/s10998-020-00363-w
35,905 | In what sense did "average" ever come to mean a statistical quantity?
Consolidation of my comments into an answer.
This doesn't address the etymology, but it does address the OP's underlying concern as to whether "average" can mean the median.
There are times when "average" is used to refer to medians. It's common to say "The average American household earns 50,000 per year," and intend that the median household income is 50,000.
Compare, for example, the headline and data presented here. The headline:
Here’s how much the average American earns at every age
and the data presented:
Below, check out the median earnings for Americans at every age bracket, according to data from the Bureau of Labor Statistics for the second quarter of 2017.
(Also later in the article, use of "average man".)
As a different example from a statistical source, consider Grissom and Kim (2012), Effect Sizes for Research, 2nd ed. The first sentence of chapter 2 reads:
Recall that the location of a population is a parameter that is usually defined as a measure of its "average," commonly its mean or median.
To be clear, I think that "average" usually indicates the mean. But to the original question, it's possible the speaker was using average in a way that implies a median value.
As a final note, I wonder whether, when we say "man of average height", we are thinking of the median rather than the mean. Probably neither per se. But I could see how, speaking this way, I might use the median as the criterion.
35,906 | In what sense did "average" ever come to mean a statistical quantity?
Average had, for at least two centuries after its introduction into European languages, implied damage to shipping goods. Words with similar roots are noted among shipping peoples (English, Dutch) and other traders in the Southern Mediterranean (Italian, French). The root awar- is from Arabic, meaning "damaged."
King Louis XIV reissued and enforced a set of pan-European maritime laws after nearly a millennium of lawlessness; this was in 1681. The Maritime Law of General Average states that damage to shipping goods should result in a proportionate share of losses to all parties: receiver and shipper, and among the shipper all the crew.
It seems like no coincidence that the usage of average to imply a mean in a general sense is attributed only 50 years later to another French writer (presumably De Moivre).
Since the Maritime Law refers to a sum being divided with equal weights, if there is any bearing on the conversation, I think we can probably say it is the arithmetic mean.
35,907 | In what sense did "average" ever come to mean a statistical quantity?
The word average came into English from Middle French avarie, a derivative of an Arabic word meaning "damaged merchandise." Avarie originally meant damage sustained by a ship or its cargo, but came to mean the expenses of such damage.
35,908 | McNemar or Fisher exact test for propensity score matched data?
This is definitely an ongoing debate in the literature, but at this point the evidence points to using a paired analysis to compute standard errors and p-values. Although the goal of matching is to arrive at two samples that mimic a randomized controlled trial, not a pair-randomized controlled trial, matching still induces a covariance between the outcomes within each matched set, which needs to be accounted for in inference. P. C. Austin has written a great deal about this (e.g., Austin & Small, 2014). Zubizarreta, Paredes, & Rosenbaum (2014) showed that after matching (i.e., discarding unmatched units), pairing (i.e., creating matched pairs) can reduce the sensitivity of the eventual estimate to unmeasured confounding and reduce standard errors, gains which can only be realized if paired analyses are used on the sample.
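For context on the paired analysis itself, McNemar's exact test uses only the discordant pairs; under the null hypothesis, their split follows Binomial(n, 1/2). A minimal standard-library sketch (our own, not code from the cited papers):

```python
import math

def mcnemar_exact(b, c):
    """Exact McNemar test: b and c are the counts of the two kinds of
    discordant pairs. Under H0 the smaller count, out of n = b + c
    discordant pairs, follows Binomial(n, 1/2). Returns the two-sided
    p-value (twice the one-sided binomial tail, capped at 1)."""
    n = b + c
    k = min(b, c)
    tail = sum(math.comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2.0 * tail)

# Hypothetical example: 10 pairs where only the treated unit had the
# event, 25 pairs where only the control unit did.
print(mcnemar_exact(10, 25))
```

Because Binomial(n, 1/2) is symmetric, doubling the smaller tail agrees with the minimum-likelihood definition of the two-sided p-value here.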
35,909 | Stacking without splitting data
You have to have real, out-of-sample predictions as the input to your blender; otherwise your blender is not learning about, and thereby improving, prediction accuracy, but instead learning about, and thereby improving, in-sample estimation accuracy, which can lead to overfitting. This is why you cannot use the whole training set for both layers: if you do, some of the "predictions" made by the base models will actually be in-sample estimates, not out-of-sample predictions.
You split your training data set so that subset 1 is used to train your base models. This is what is shown in your left picture. Your base models then are used to generate predictions for subset 2, and these predictions, along with the actuals for subset 2, are given to your blender for training. This is what is shown in your right picture. Basically, the predictions are features that are given to your blender, along possibly with other features from subset 2.
The model that the blender comes up with based on subset 2 is then used to predict the test data. This can be done by predicting the test data with each of the base models (developed on subset 1), then predicting the test data again with the blender model (developed on subset 2) + the predictions from the base models. The resultant predictions are the ones you use for calibrating / testing the combined base models + blender model.
Alternatively, you can re-train your base models on subset 1 and 2 prior to making predictions for the test data set. This will tend to improve the base model predictions of the test data set, but (hopefully slightly) weaken the link between the base models and the blender model, as the blender saw less-accurate predictions when it was being trained. The blender will consequently add less value and more overfitting, but given that the base models are more accurate, it may balance out.
ETA (from comments):
In practice, I tend to split into more than 2 groups, liking 10 groups for some reason. The base models are then trained with much more data so are more accurate (at least in situations where you don't have overwhelming amounts of data) and the blender is trained on predictions from models that have accuracy characteristics that are closer to what it will see when going operational, which is a win-win in accuracy terms.
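A minimal sketch of this recipe in Python (scikit-learn assumed; the models, the synthetic data and the split sizes are illustrative, not prescriptive):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
# Training data vs. final test data.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
# Subset 1 trains the base models; subset 2 trains the blender.
X1, X2, y1, y2 = train_test_split(X_train, y_train, random_state=0)

base_models = [RandomForestClassifier(random_state=0),
               LogisticRegression(max_iter=1000)]
for m in base_models:
    m.fit(X1, y1)                      # base models see only subset 1

# Out-of-sample base-model predictions become blender features.
meta2 = np.column_stack([m.predict_proba(X2)[:, 1] for m in base_models])
blender = LogisticRegression().fit(meta2, y2)

# At test time: base models predict first, then the blender combines.
meta_test = np.column_stack([m.predict_proba(X_test)[:, 1] for m in base_models])
acc = accuracy_score(y_test, blender.predict(meta_test))
```

Because the blender only ever sees predictions for rows the base models were not trained on, it learns how to combine genuine out-of-sample predictions rather than in-sample estimates.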
35,910 | Stacking without splitting data
Let me explain.
In stacking you split your data in two.
One holdout set (10 - 20%).
One training set (80 - 90%).
You train your base learners individually on the training set using the same cross-validation method for each. You must use the same cross-validation fold indices for all base learners. This is because you can only train the metaclassifier on the predicted probabilities of those test-fold sections and the original raw data associated with these rows, because these rows weren't used for building the base-learners.
Note: Not a single training fold can overlap a testing fold. If row 37 is inside a test fold, it cannot occur in any training fold. Otherwise you will be passing information from one layer to the next. Use 10 to 20% of the training set rows as test-folds.
Now use each trained base learner to predict on the holdout set, and write new probability variables to it (Example: SVM_probs_up, LDA_probs_up).
Once the metaclassifier is trained using the test-fold probabilities from each base learner + the raw data from the same rows, use the meta-classifier to predict on the holdout set for your final results.
With blending the training set is also split. But instead of using cross-validation you just take out 30% - 40% straight away (the training-validation set) and then train the base-learners on the remaining 60% - 70% of the training set. Then predict on the training-validation set and holdout set, writing these predictions as new variables to both sets. Then train the metaclassifier on the training-validation set, and finally predict with the metaclassifier on the holdout set for your final results.
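The cross-validation recipe above can be sketched with scikit-learn's cross_val_predict, which guarantees that each row's base-learner probability comes from a fold whose training part did not contain that row (the models, fold count and 80/20 holdout split are illustrative):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_predict, train_test_split

X, y = make_classification(n_samples=2000, n_features=10, random_state=1)
X_train, X_hold, y_train, y_hold = train_test_split(X, y, test_size=0.2,
                                                    random_state=1)
base = [RandomForestClassifier(random_state=1),
        LogisticRegression(max_iter=1000)]
cv = KFold(n_splits=5, shuffle=True, random_state=1)   # same folds for all bases

# Test-fold probabilities: row i is predicted by a model that never saw row i.
probs = np.column_stack([cross_val_predict(m, X_train, y_train, cv=cv,
                                           method="predict_proba")[:, 1]
                         for m in base])
# Metaclassifier trained on base probabilities + the raw features.
meta = LogisticRegression(max_iter=1000)
meta.fit(np.column_stack([probs, X_train]), y_train)

# Fit the bases on the full training set, then score the holdout set.
hold_probs = np.column_stack([m.fit(X_train, y_train)
                               .predict_proba(X_hold)[:, 1] for m in base])
acc = meta.score(np.column_stack([hold_probs, X_hold]), y_hold)
```

Passing the same `cv` object to every base learner is what enforces the "same fold indices for all base learners" requirement.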
35,911 | GINI and AUC relationship
I had the same thoughts and I stumbled upon a nice presentation.
Let us use the following definitions:
Gini (mostly equal to the accuracy ratio "AR") is the ratio of the area between your curve and the diagonal to the area between the perfect model and the diagonal.
This definition on the CAP curve gives the usual Gini. If you use it on the ROC curve then you see the relation to the AUC. The perfect model in the ROC is just a straight line (0% FPR and 100% TPR).
I tried to make this clear in the following two plots.
First on the CAP you get Gini by the usual formula:
Then on the ROC you see the perfect model and apply the same formula. We use that the area between the perfect model and the diagonal is $1/2$ in this case:
Finally, using that $A = G/2$ we get the relationship: $G = 2 \cdot AUC -1$.
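The identity is easy to check numerically (scikit-learn's roc_auc_score assumed; the simulated labels and scores are illustrative):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=1000)              # true binary labels
scores = y + rng.normal(scale=1.5, size=1000)  # imperfect model scores

auc = roc_auc_score(y, scores)
gini = 2 * auc - 1                             # G = 2*AUC - 1
# A perfect model has AUC = 1, hence Gini = 1; a random one has AUC = 0.5, Gini = 0.
```

The two boundary cases in the comment follow directly from the formula: Gini rescales AUC from the $[0.5, 1]$ range of a useful classifier onto $[0, 1]$.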
35,912 | In plain English what is the difference between a most powerful test and a uniformly most powerful test?
"Uniformly" means regardless of the values of the unobservable parameters. One test may be the most powerful one for a particular value of an unobservable parameter while a different test is the most powerful one for a different value of the parameter. A uniformly most powerful test remains the most powerful one regardless of the value of the parameters.
"Uniformly" means regardless of the values of the unobservable parameters. One test may be the most powerful one for a particular value of an unobservable parameter while a different test is the most powerful one for a different value of the parameter. A uniformly more powerful test remains the most powerful one regardless of the value of the parameters. | In plain English what is the difference between a most powerful test and a uniformly most powerful t
"Uniformly" means regardless of the values of the unobservable parameters. One test may be the most powerful one for a particular value of an unobservable parameter while a different test is the most |
35,913 | In plain English what is the difference between a most powerful test and a uniformly most powerful test?
According to Mood [1]:
Most powerful test (MP): a test $\delta$ of $H_0: \theta = \theta_0$ vs $H_1: \theta = \theta_1$ of size $\alpha$ is MP if it has the greatest power $\pi(\theta_1 \mid \delta)$ among all tests of size $\alpha$ or less, where $\pi$ is the power function. In words, $\delta$ has the greatest capacity of detecting $H_1$ among tests of size at most $\alpha$ for these specific hypotheses.
Uniformly most powerful test (UMP): a test $\delta$ of $H_0: \theta \in \Theta_0$ vs $H_1: \theta \in \Theta - \Theta_0$ of size $\alpha$ is UMP if it has the greatest power $\pi(\theta \mid \delta)$, for every $\theta \in \Theta - \Theta_0$, among all tests of size $\alpha$ or less. "Uniformly" refers to all values of $\theta$.
Notice the difference in the two statements with respect to the hypotheses and power. A non-UMP test can be most powerful just for a specific value of $\theta$. A UMP test is the "most powerful" test for each value of $\theta$ in $H_1$.
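A concrete illustration (my own example, not from Mood): for $X_1, \dots, X_n \sim N(\theta, 1)$ and $H_0: \theta \le 0$ vs $H_1: \theta > 0$, the test that rejects when $\sqrt{n}\,\bar{X} > z_\alpha$ is the classic UMP test. Its power function $\pi(\theta \mid \delta) = 1 - \Phi(z_\alpha - \theta\sqrt{n})$ equals $\alpha$ at the boundary of $H_0$ and is the largest achievable simultaneously at every $\theta > 0$:

```python
import math
from statistics import NormalDist

nd = NormalDist()
alpha, n = 0.05, 25
z_alpha = nd.inv_cdf(1 - alpha)        # critical value of the UMP test

def power(theta):
    # P(reject | theta) = 1 - Phi(z_alpha - theta*sqrt(n))
    return 1 - nd.cdf(z_alpha - theta * math.sqrt(n))

# Size alpha at the boundary of H0, and power growing with theta:
print(round(power(0.0), 3), round(power(0.2), 3), round(power(0.5), 3))
```

This is the "uniformly" in action: the same test $\delta$ maximises the power at every alternative value of $\theta$ at once, not just at one pre-chosen $\theta_1$.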
[1] Mood, Alexander McFarlane. "Introduction to the Theory of Statistics." (1950).
35,914 | Expected minimum distance from a point with varying density
Consider the distance to the origin of $n$ independently distributed random variables $(X_i,Y_i)$ that have uniform distributions on the square $[-1,1]^2.$
Writing $R_i^2 = X_i^2+Y_i^2$ for the squared distance, Euclidean geometry shows us that
$$\Pr(R_i \le r \le 1) = \frac{1}{4} \pi\, r^2 $$
while (with a little more work)
$$\Pr(1 \le R_i \le r \le \sqrt{2}) = \frac{1}{4}\left(\pi\, r^2 + 4\sqrt{r^2-1} - 4 r^2 \operatorname{ArcTan}\left(\sqrt{r^2-1}\right)\right).$$
Together these determine the distribution function $F$ common to all the $R_i.$
Because the $n$ points are independent, so are the distances $R_i,$ whence the survival function of $\min(R_i)$ is
$$S_n(r) = (1 - F(r))^n,$$
implying the mean shortest distance is
$$\mu(n) = \int_0^\sqrt{2} S_n(r)\, dr.$$
For $n\gg 1,$ almost all the area in this integral is close to $0,$ so we may approximate it as
$$\mu_\text{approx}(n) = \int_0^1S_n(r)\, dr = \int_0^1\left(1 - \frac{\pi}{4}r^2\right)^n\,dr.$$
The error is not greater than the part of the integral that was omitted, which is in turn no greater than
$$(\sqrt{2}-1)(1-F(1))^n = (\sqrt{2}-1)(1 - \pi/4)^n,$$
which obviously decreases exponentially with $n.$
We may in turn approximate the integrand as
$$\left(1 - \frac{\pi}{4}r^2\right)^n \approx \exp\left(-\frac{1}{2} \frac{r^2}{2/(n\pi)}\right).$$
Up to a normalizing constant, this is the density function of a Normal distribution with mean $0$ and variance $\sigma^2=2/(n\pi).$ The missing normalizing constant is
$$C(n) = \frac{1}{\sqrt{2\pi \sigma^2}} = \frac{1}{\sqrt{2\pi\ 2 / (n\pi)}} = \frac{\sqrt{n}}{2}.$$
Therefore, extending the integral from $1$ to $\infty$ (which adds an error proportional to $e^{-n}$),
$$\mu_\text{approx}(n) \approx \int_0^\infty e^{-t^2/(2\sigma^2)}\,dt = \frac{1}{C(n)} \frac{1}{2} = \frac{1}{\sqrt{n}}.$$
In the process of obtaining this approximation three errors were made. Collectively they are at most of order $n^{-1},$ the error incurred when approximating $S_n(r)$ by the Gaussian.
This figure plots $n$ times the difference between $1$ and $\sqrt{n}$ times the mean shortest distance observed in $10^5$ separate simulated datasets for each $n.$ Because they decrease as $n$ grows, this is evidence that the error is $o(n^{-1}/\sqrt{n}) = o(n^{-3/2}).$
Finally, the factor $1/2$ in the question derives from the size of the square: the density is the number of points $n,$ per unit area and the square $[-1,1]^2$ has area $4$, whence
$$2\sqrt{\text{Density}} = 2\sqrt{n/4} = \sqrt{n}.$$
This is the R code for the simulation:
n.sim <- 1e5 # Size of each simulation
d <- 2 # Dimension
n <- 2^(1:11) # Numbers of points in each simulation
#
# Estimate mean distance to the origin for each `n`.
#
y <- sapply(n, function(n.points) {
x <- array(runif(d*n.points*n.sim, -1, 1), c(d, n.points, n.sim))
mean(sqrt(apply(colSums(x^2), 2, min)))
})
#
# Plot the errors (normalized) against `n`.
#
library(ggplot2)
ggplot(data.frame(Log2.n = 1:length(n), Error = sqrt(n) * (1 - y * n^(1/d))),
       aes(Log2.n, Error)) + geom_point() + geom_smooth() +
  ylab("Error * n") + ggtitle("Simulation Means")
35,915 | Setting max_depth greater than the number of features in a Random Forest
There is no problem with setting the maximum depth of a Random Forest (or more specifically, of any tree) higher than the number of features.
For instance, you could have two features, Age and Sex. Then you could have a series of splits that first check whether Age>18, if so check whether Sex=M, if so check whether Age>40. The end result would be four leaves/bins, one for Age<=18, one for Age>18 & Sex=F, one for 18<Age<=40 & Sex=M and one for Age>40 & Sex=M.
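A quick check with scikit-learn (my own toy data, not from the answer): with only two features, a fully grown tree still reaches depth greater than two by re-splitting on the same feature.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Two features (an Age-like value and a constant dummy); the labels
# alternate with age, so a perfect tree must split Age at many thresholds.
X = np.array([[age, 0] for age in range(8)])
y = np.array([0, 1, 0, 1, 0, 1, 0, 1])

tree = DecisionTreeClassifier(random_state=0).fit(X, y)
print(tree.get_depth())   # at least 3, i.e. deeper than the 2 features
```

Separating eight alternating labels needs seven thresholds on the single informative feature, so the tree necessarily exceeds depth 2 even though only two features exist.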
35,916 | Setting max_depth greater than the number of features in a Random Forest
A tree's (Decision Tree / RF) nodes are only split based on the Information Gain or Gini impurity; the number of features places no limit on this. The maximum depth is a separate parameter (more precisely, a hyperparameter).
35,917 | feature extraction: freezing convolutional base vs. training on extracted features
I think you understand it pretty correctly. To address your questions:
(1.) Therefore it is essentially the same as the convolutional base in the first approach and the classifiers are identical as well so I think they should give us the same accuracy (and speed). What am I missing?
The methods are generally equal. You are not missing anything. The difference in the accuracy is thanks to data augmentation (see below).
The second method is slower because you need to 1) generate augmented images on the fly, 2) compute the convnet features for every augmented image. The first method skips this and just uses precomputed convnet features for a fixed set of images.
(2.) In the book it is suggested that the first and second approach reach an accuracy of 90% and 96%, respectively on the validation data. (If the approaches are different) Why does this happen?
The second method should work better because it uses data augmentation. Data augmentation is an extremely powerful technique, so improving the accuracy by 6% is to be expected.
(3.) It is suggested in the book that in the first approach we could not use data augmentation. It is not clear to me why this is so.
Theoretically, you could use data augmentation for the first method. Instead of generating augmented samples on-the-fly, you would first generate a huge number of those (say, 1000 variants of every sample) and compute their convnet features, which you would use to train the classifier. The downsides of this approach are 1) higher memory requirements, 2) "limited" number of augmented samples (after every 1000 epochs, you just start using the same samples again). On the other hand, it is faster than the second approach.
(4.) Considering the value of batch_size and steps_per_epoch, the number of images used for training in both approaches is the same, i.e. 2000.
In every epoch, both methods use 2000 images. However, the first method uses the same 2000 images in every epoch. The second method uses different, augmented versions of those images every epoch.
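The equivalence claimed in (1.) and the role of augmentation can be demonstrated without Keras (a fixed random projection stands in for the frozen conv base here; this is an illustration, not the book's VGG16 code):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 32))   # stand-in "images"
W = rng.normal(size=(32, 16))    # frozen "conv base" weights, never updated

def base(x):
    # the frozen feature extractor (a ReLU projection here)
    return np.maximum(x @ W, 0.0)

cached = base(X)                         # approach 1: precompute features once
assert np.array_equal(cached, base(X))   # approach 2 without augmentation: identical

# With augmentation every epoch produces new inputs, so the cache is useless:
augmented = X + rng.normal(scale=0.1, size=X.shape)
print(np.array_equal(cached, base(augmented)))   # False
```

Without augmentation the frozen base is a deterministic function, so precomputing its outputs once (approach 1) feeds the classifier exactly the same features as running the base every epoch (approach 2); augmented inputs, however, yield new features each epoch, which is why approach 1 cannot use on-the-fly augmentation.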
35,918 | feature extraction: freezing convolutional base vs. training on extracted features
For point (2), there seems to have been an issue in Keras where conv_base.trainable = False wasn't working prior to version 2.1.0 (see https://github.com/fchollet/deep-learning-with-python-notebooks/issues/21).
I.e. the code was inadvertently re-training at least part of the base network in the version of Keras that was used when writing the book.
Indeed, if you remove the line:
conv_base.trainable = False
from the code, you will get a 96% accuracy, even when using Keras 2.2.0.
35,919 | Maximum Likelihood estimation with a "dependent parameter"?
To deal with maximum likelihood estimation, you must formulate your problem in such a way that there is a clear range of allowable values for the unknown parameters. In your problem you need to formulate an allowable range of values for the parameter $\gamma$ for each possible value of the dependent parameter $\beta$. To do this you will need to consider what is the allowable range of functions $g$.
For example, suppose you have some independent function $g \in \mathcal{G}$ where the range of this function does not depend on the parameters in your problem. You can then define $\mathcal{Y} (\beta) \equiv \{ \gamma(\beta) | g \in \mathcal{G} \}$ for each allowable value of $\beta$ and you have the partially-maximised likelihood:
$$\bar{L}(\alpha, \beta) \equiv \max_{g \in \mathcal{G}} L(\alpha, \beta, g) = \max_{\gamma \in \mathcal{Y}(\beta)} L(\alpha, \beta, \gamma).$$
This tells you that maximising the likelihood function over the set of allowable functions $g$ is the same as maximising it over the corresponding set of allowable parameters $\gamma$. The latter requires you to specify the range of $\gamma$, and the partially-maximised likelihood then depends on $\beta$ through its direct appearance in the likelihood function, and also through its effect on the allowable range of $\gamma$.
In your problem, you have not been clear about the range of allowable functions $g$ (I am assuming this is some real function). If you impose no constraint on this, you will have $\mathcal{Y} (\beta) = \mathbb{R}$ for all $\beta$, which gives $\bar{L}(\alpha, \beta) = \max_{\gamma \in \mathbb{R}} L(\alpha, \beta, \gamma)$. In this case, there is no problem of parametric dependence and so this is an ordinary maximum-likelihood problem.
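A toy numeric check of this partial maximisation (my own example, not from the question): for $y_i \sim N(\beta, \gamma)$ the inner maximisation over the variance $\gamma$ has the closed form $\hat{\gamma}(\beta) = \frac{1}{n}\sum_i (y_i - \beta)^2$, and a grid-based profile likelihood reproduces it.

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(loc=2.0, scale=1.5, size=200)
n, beta = y.size, 1.0                      # profile at a fixed beta

sse = ((y - beta) ** 2).sum()
def negloglik(gamma):                      # -log L(beta, gamma); gamma = variance
    return 0.5 * n * np.log(2 * np.pi * gamma) + sse / (2 * gamma)

grid = np.linspace(0.5, 10.0, 100001)      # numeric inner maximisation over gamma
gamma_hat = grid[np.argmin(negloglik(grid))]
closed_form = sse / n                      # the known maximiser gamma_hat(beta)
print(gamma_hat, closed_form)              # agree to grid resolution
```

Repeating this inner maximisation for every $\beta$ gives exactly the partially-maximised likelihood $\bar{L}(\beta)$, which can then be maximised over $\beta$ as an ordinary problem.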
35,920 | Maximum Likelihood estimation with a "dependent parameter"? | Part 1
Can you model $g(s)$ as a polynomial? If so, I believe you could turn $\gamma(\beta)$ into a closed form expression by analytically computing the integral. In general, if you can find some approximation of $g(s)$ that makes the integral analytically solvable, then you're back to ordinary maximum likelihood (without a $\gamma$ at all).
Let's say you don't know much about $g(s)$. But you must know something! So put a Gaussian Process prior on it with mean function $F(s)$. Use a tool like Stan, sample the function during an MCMC draw, and perform numerical integration within the loop. You can see me trying this here (see line 96). Now it's slow, but it is a way to estimate and incorporate uncertainty in $g(s)$.
If you have a complex nonlinear expression for $g(s)$ but don't want to go the Bayesian route, you could incorporate something like the trapezoidal rule in your maximum likelihood routine, but it too will be slow!
Part 2
In order to even speak of dependence between parameters, you have to adopt a Bayesian viewpoint, since the classical viewpoint has all parameters as fixed unknown quantities. If you take the classical interpretation, then I think you're okay just estimating $\gamma$ as a constant. In fact, why not use the estimates of $\beta$ and $\gamma$ to gain insight into $g(s)$?
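The polynomial suggestion from Part 1 can be sketched numerically. The integral form below, $\gamma(\beta) = \int_0^\beta g(s)\,ds$, is an assumption for illustration only (the OP's actual integrand is not shown); numpy's polynomial antiderivative gives the closed form, and a hand-rolled trapezoidal rule stands in for the numerical fallback mentioned above:

```python
import numpy as np
from numpy.polynomial import Polynomial

# Assumed integral form for illustration (the OP's actual integrand is not shown):
# gamma(beta) = integral from 0 to beta of g(s) ds, with g a polynomial.
g = Polynomial([1.0, -0.5, 2.0])  # g(s) = 1 - 0.5*s + 2*s**2
G = g.integ()                     # antiderivative, obtained analytically

def gamma_closed_form(beta):
    return G(beta) - G(0.0)

def gamma_trapezoid(beta, n=10_001):
    # numerical fallback for a g with no convenient antiderivative
    s = np.linspace(0.0, beta, n)
    vals = g(s)
    return (vals.sum() - 0.5 * (vals[0] + vals[-1])) * (s[1] - s[0])

print(gamma_closed_form(1.7), gamma_trapezoid(1.7))  # the two agree closely
```

With a polynomial $g$ the closed form removes $\gamma$ as a free parameter entirely, which is exactly the simplification Part 1 is after.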
35,921 | What is the justification of using taylor approximations inside expectation operators? | For your specific example, the first-order Taylor approximation around $x_0=0$ is $e^x = e^0 + e^0 x + R_1 = 1 + x + R_1$, so
$$E(e^x) = E(1+x) + E(R_1)$$
So the question is "what can we say about $E(R_1)$?
Well, we do not know as much as we would like to about the Taylor approximation -meaning, about the behavior of the remainder.
See this example of why the remainder is a treacherous thing, but also, I would suggest to read through the very stimulating thread, Taking the expectation of Taylor series (especially the remainder) on the matter.
An interesting result in linear regression is the following: assume we have the true non-linear model
$$y_i = m(\mathbf x_i) + e_i$$
where $m(\mathbf x_i)$ is the conditional expectation function, $E(y_i\mid \mathbf x_i) = m(\mathbf x_i)$, and so by construction $E(e_i \mid \mathbf x_i) = 0$.
Consider the first-order Taylor approximation specifically around $E(\mathbf x_i)$
$$y_i = \beta_0+\mathbf x_i'\beta + u_i, \;\;\;u_i = R_{1i} + e_i$$
where $R_{1i}$ is the Taylor remainder of the approximation, the betas are the partial derivatives of the non-linear function with respect to the $\mathbf x_i$'s evaluated at $E(\mathbf x_i)$, while the constant term collects all other fixed things of the approximation (by the way, this is the reason why a) we are told "always include a constant in the specification" but that b) the constant is beyond meaningful interpretation in most cases).
Then, if we apply Ordinary Least-Squares estimation, we obtain that the Taylor Remainder will be uncorrelated to the regressors, $E(R_{1i}\mathbf x_i) = E(R_{1i})E(\mathbf x_i)$, and also $E(R_{1i}^2) = \min$. The first result implies that the properties of the OLS estimator for the betas are not affected by the fact that we have approximated the non-linear function by its first order Taylor approximation. The second result implies that the approximation is optimal under the same criterion for which the conditional expectation is the optimal predictor (mean squared error, here mean squared remainder).
Both premises are needed for these results, namely, that we take the Taylor expansion around the expected value of the regressors, and that we use OLS.
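The size of $E(R_1)$ can be made concrete for a case where the exact expectation is known: if $X \sim N(0, \sigma^2)$ then $E(e^X) = e^{\sigma^2/2}$ (the log-normal mean), while the first-order approximation gives $E(1+X) = 1$, so $E(R_1) = e^{\sigma^2/2} - 1$, which grows quickly with $\sigma$:

```python
import math

# For X ~ N(0, sigma^2), E[e^X] = exp(sigma^2 / 2) exactly (the log-normal mean),
# while the first-order approximation E[1 + X] = 1 drops E[R_1] entirely.
for sigma in (0.1, 0.5, 1.0, 2.0):
    exact = math.exp(sigma ** 2 / 2)
    print(f"sigma={sigma}: E[e^X]={exact:.4f}, approx=1, E[R_1]={exact - 1:.4f}")
```

For small $\sigma$ the remainder is negligible, but at $\sigma=2$ it is already larger than 6, so the first-order approximation is useless there.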
35,922 | What is the justification of using taylor approximations inside expectation operators? | One situation in which this is used is asymptotics.
For example, suppose $\dfrac{X_n-\mu}{\sigma/\sqrt n}\sim N(0,1)$ and $g$ is a smooth function. Then
$$
\frac{g(X_n)-g(\mu)}{\left|g'(\mu)\right|\sigma/\sqrt n} \overset{\mathcal L} \longrightarrow N(0,1) \text{ as } n\to\infty,
$$
where $\text{“} \overset{\mathcal L} \longrightarrow \text{''}$ means convergence in distribution (also called convergence in law). In effect we're deleting the higher-order terms in the expansion
$$
g(x) = g(\mu) + g'(\mu)(x-\mu) + \frac{g''(\mu)} 2 (x-\mu)^2 + \frac{g'''(\mu)} 6 (x-\mu)^3 + \cdots
$$
and treating it as
$$
g(x) \approx g(\mu) + g'(\mu)(x-\mu).
$$
One writes
$$
g(X_n) \sim AN\left( g(\mu), \frac{g'(\mu)^2\sigma^2} n \right)
$$
where $\text{“}AN\text{''}$ means "asymptotically normal."
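A quick Monte Carlo check of this asymptotic variance (a sketch with $g = \exp$, chosen here only for illustration, and $X_n$ the mean of $n$ iid normal draws):

```python
import numpy as np

# Monte Carlo check of the asymptotic-normality claim above, with g = exp
# (an arbitrary smooth choice for illustration): the sd of g(X_bar) should be
# close to |g'(mu)| * sigma / sqrt(n) for large n.
rng = np.random.default_rng(42)
mu, sigma, n, reps = 1.0, 2.0, 2_000, 3_000

xbar = rng.normal(mu, sigma, size=(reps, n)).mean(axis=1)  # reps draws of X_bar
sd_empirical = np.exp(xbar).std(ddof=1)
sd_delta = np.exp(mu) * sigma / np.sqrt(n)  # |g'(mu)| sigma / sqrt(n), with g' = exp

print(sd_empirical, sd_delta)  # the two should agree to within a few percent
```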
35,923 | Why is my LDA performance a non-monotonic function of the amount of training data? | You discovered an interesting phenomenon.
LDA computations rely on inverting the within-class scatter matrix $\mathbf S_W$. Usually the LDA solution is presented as an eigenvalue decomposition of $\mathbf S_W^{-1}\mathbf S_B$, but scikit-learn never explicitly computes the scatter matrices and instead uses SVD of data matrices to compute the same thing. This is similar to how PCA is often computed directly via SVD of $\mathbf X$ without ever computing the covariance matrix. What scikit-learn does is to compute the SVD of the between-class data $\mathbf X_B$ transformed with $\mathbf S_W^{-1/2}$ ("whitened with respect to within-class covariance"). And to compute $\mathbf S_W^{-1/2}$, they do SVD of the within-class data $\mathbf X_W$.
In the $n<p$ situation the covariance matrix of $\mathbf X_W$ is not full rank, has some zero eigenvalues, and cannot be inverted. What happens in scikit in this case is that they simply use only non-zero singular values for the inversion (github link).
In other words, they implicitly do PCA of the within-class data, keep only non-zero PCs, and then do LDA on that.
The question is now, how should we expect this to affect the overfitting? Let's consider the same setting as in your question (but backwards), when the total sample size $N$ is decreasing starting from $N\gg p$ all the way to $N \ll p$. Let $n$ be the sample size per class, so that $N=nK$ where $K$ is the number of classes.
In the limit of large sample sizes PCA step has no effect (all PCs are used), overfitting is reduced to zero, and the out-of-sample (e.g. cross-validated) performance should be the best.
For $N\approx p$, the covariance matrix is already full rank (so PCA step has no effect) but the smallest eigenvalues are very noisy and LDA will badly overfit. Performance can drop almost to zero.
For $N<p$, PCA step becomes crucial. Only $N-K$ PCs are non-zero; so dimensionality gets reduced to $N-K$. What happens now depends on whether some of the leading PCs have good discriminatory power. They do not have to, but it is often the case that they do.
If so, then only using a few leading PCs should work reasonably well. PCA serves as a regularization step and improves the performance.
Of course here no dimensionality reduction is performed by PCA because all available components are kept. So it is not a priori clear that it would improve the performance but as we see it does, at least in this case.
However, if $n$ is so small that some of the important PCs cannot be estimated and are left out, then the performance should decrease again.
I don't think I have seen this discussed anywhere in the literature, but this is my understanding of this curious curve.
Note that it is a bit of an artifact of how scikit-learn (with the svd solver) deals with the $N<p$ situation.
Update: We can predict the position of the minimum as follows. Within-class covariance matrix in each class has at most rank $n-1$, and so the pooled within-class covariance has at most rank $(n-1)K$. The minimum should occur when it becomes full rank (and PCA stops having any effect), i.e. for the smallest $n$ such that $(n-1)K>p$: $$n_\mathrm{min} = \Big\lceil\frac{p+K}{K}+1\Big\rceil.$$ This seems to fit perfectly to all your figures.
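The rank argument in the update is easy to verify numerically: center each class separately, pool the scatter matrices, and check that the rank equals $\min((n-1)K, p)$. A minimal numpy sketch:

```python
import numpy as np

# Check the rank argument: each separately-centred class contributes rank n-1,
# so the pooled within-class scatter S_W has rank min((n-1)*K, p).
rng = np.random.default_rng(0)
K, n, p = 3, 5, 50  # K classes, n points per class, p features (here n < p)

Sw = np.zeros((p, p))
for _ in range(K):
    X = rng.normal(size=(n, p))
    Xc = X - X.mean(axis=0)  # centre each class on its own mean
    Sw += Xc.T @ Xc

rank = np.linalg.matrix_rank(Sw)
print(rank)  # min((n-1)*K, p) = min(12, 50) = 12
```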
35,924 | Gaussian process prior | A Gaussian Process is considered a prior distribution on some unknown function $\mu(x)$ (in the context of regression). This is because you're assigning the GP a priori, without exact knowledge as to the truth of $\mu(x)$. Learning a GP, and thus its hyperparameters $\mathbf\theta$, is conditional on $\mathbf{X}$ in $k(\mathbf{x},\mathbf{x'})$.
It is worth noting that prior knowledge may drive the selection, or even engineering, of kernel functions $k(\mathbf{x},\mathbf{x'})$ for the particular model at hand.
If using a completely Bayesian formulation (such as fitting using MCMC rather than maximum likelihood), one may incorporate additional prior knowledge on the hyperparameters $\mathbf\theta$ if such knowledge is available.
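To make the "GP as a prior over functions" idea concrete, here is a minimal sketch that draws functions from a zero-mean GP prior, using a squared-exponential kernel as one common (assumed) choice of $k(\mathbf{x},\mathbf{x'})$:

```python
import numpy as np

# Drawing functions from a zero-mean GP prior: f ~ N(0, K) with K_ij = k(x_i, x_j).
# The squared-exponential kernel is one common (assumed) choice of k.
def rbf_kernel(x, lengthscale=1.0, variance=1.0):
    d = x[:, None] - x[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

rng = np.random.default_rng(1)
x = np.linspace(0, 5, 100)
K = rbf_kernel(x) + 1e-8 * np.eye(x.size)  # jitter for numerical stability
samples = rng.multivariate_normal(np.zeros(x.size), K, size=3)
print(samples.shape)  # (3, 100): three function draws evaluated at x
```

Each row of `samples` is one function drawn from the prior; changing the hyperparameters (lengthscale, variance) changes which functions are a priori plausible.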
35,925 | Are GAM models linear in the parameters? | Yes, GAMs are linear in the parameters. If we ignore the estimation of smoothness parameters, once we have created the bases for all the covariates we want to fit smooth effects of, a GAM is just a plain old GLM with coefficients for individual basis functions.
This is also true in smooth interactions. The ti(x1, x2) term is just a tensor product basis formed by two univariate marginal bases and the resulting coefficients map to individual functions in the 2-d basis. (The ti() basis has had the main effects of the separate covariates removed when invoked with two or more covariates.)
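A minimal illustration of the point: once a basis matrix $B$ is built from the covariate, the smooth is $f = Bc$, which is an ordinary linear model in the coefficients $c$. The truncated-power cubic spline basis below is an illustrative stand-in, not the penalised basis mgcv would construct:

```python
import numpy as np

# Once a basis matrix B is built from x, the smooth is f = B @ c: an ordinary
# linear model in the coefficients c. The truncated-power cubic spline basis
# below is an illustrative stand-in, not the penalised basis mgcv constructs.
rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0, 1, 200))
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=200)

knots = np.linspace(0.1, 0.9, 8)
B = np.column_stack([np.ones_like(x), x, x**2, x**3] +
                    [np.maximum(x - k, 0) ** 3 for k in knots])

c, *_ = np.linalg.lstsq(B, y, rcond=None)  # plain least squares: a GLM-style fit
fitted = B @ c

# Linearity in the parameters: c -> B @ c is a linear map.
c2 = rng.normal(size=c.shape)
print(np.allclose(B @ (c + c2), B @ c + B @ c2))  # True
```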
35,926 | What are the properties of MLE that make it more desirable than OLS? | As you move sufficiently far away from normality, all linear estimators may be arbitrarily bad.
Knowing that you can get the best of a bad lot (i.e. the best linear unbiased estimate) isn't much consolation.
If you can specify a suitable distributional model (ay, there's the rub),
maximizing the likelihood has both a direct intuitive appeal - in that it "maximizes the chance" of seeing the sample you did actually see (with a suitable refinement of what we mean by that for the continuous case) and a number of very neat properties that are both theoretically and practically useful (e.g. relationship to the Cramer-Rao lower bound, equivariance under transformation, relationship to likelihood ratio tests and so forth). This motivates M-estimation for example.
Even when you can't specify a model, it is possible to construct a model for which ML is robust to contamination by gross errors in the conditional distribution of the response -- where it retains pretty good efficiency at the Gaussian but avoids the potentially disastrous impact of arbitrarily large outliers.
[That's not the only consideration with regression, since there's also a need for robustness to the effect of influential outliers for example, but it's a good initial step]
As a demonstration of the problem with even the best linear estimator, consider this comparison of slope estimators for regression. In this case there are 100 observations in each sample, x is 0/1, the true slope is $\frac12$ and errors are standard Cauchy. The simulation takes 1000 sets of simulated data and computes the least squares estimate of slope ("LS") as well as a couple of nonlinear estimators that could be used in this situation (neither is fully efficient at the Cauchy but they're both reasonable) - one is an L1 estimator of the line ("L1") and the second computes a simple L-estimate of location at the two values of x and fits a line joining them ("LE").
The top part of the diagram is a boxplot of those thousand slope estimates for each simulation. The lower part is the central one percent (roughly, it is marked with a faint orange-grey box in the top plot) of that image "blown up" so we can see more detail. As we see, the least squares slopes range from -771 to 1224 and the lower and upper quartiles are -1.24 and 2.46. The error in the LS slope was over 10 in more than 10% of the samples. The two nonlinear estimators do much better -- they perform fairly similarly to each other, none of the 1000 slope estimates in either case are more than 0.84 from the true slope and the median absolute error in the slope is in the ballpark of 0.14 for each (vs 1.86 for the least squares estimator). The LS slope has an RMSE 223 and 232 times that of the L1 and LE estimators in this case (that's not an especially meaningful quantity, however, as the LS estimator doesn't have a finite variance when you have Cauchy errors).
There are dozens of other reasonable estimators that might have been used here; this was simply a quick calculation to illustrate that even the best/most efficient linear estimators may not be useful. An ML estimator of the slope would perform better (in the MSE sense) than the two robust estimators used here, but in practice you'd want something with some robustness to influential points.
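A scaled-down sketch of the simulation described above (not the original code): with 0/1 $x$, the LS slope reduces to the difference of group means, and the "LE"-style estimator here uses the difference of group medians as a simple robust location estimate at each $x$:

```python
import numpy as np

# Scaled-down sketch of the simulation above (not the original code): x is 0/1,
# true slope 1/2, standard Cauchy errors. For binary x the LS slope is just the
# difference of group means; the "LE"-style estimator uses group medians instead.
rng = np.random.default_rng(3)
reps, m, true_slope = 1_000, 50, 0.5

ls_err, le_err = [], []
for _ in range(reps):
    y0 = rng.standard_cauchy(m)               # responses at x = 0
    y1 = true_slope + rng.standard_cauchy(m)  # responses at x = 1
    ls_err.append(abs((y1.mean() - y0.mean()) - true_slope))
    le_err.append(abs((np.median(y1) - np.median(y0)) - true_slope))

print(np.median(ls_err), np.median(le_err))  # LS is wildly worse under Cauchy errors
```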
35,927 | What are the properties of MLE that make it more desirable than OLS? | In the case of normally distributed data, OLS coincides with the MLE, a solution which is BLUE (at that point). Once outside normality, OLS isn't BLUE anymore (in the terms of the Gauss-Markov theorem) - this is because OLS looks to minimize the SSR whereas the GMT defines BLUE in terms of minimal SE. See more here.
Generally speaking, given that an MLE exists (google for 'MLE failure' or for cases where the MLE doesn't exist), it is easier to adjust it, either for minimizing the variance or making it unbiased (and therefore comparable to other estimators).
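The coincidence of OLS and the Gaussian MLE is easy to check numerically: the Gaussian log-likelihood is a decreasing function of the SSR, so perturbing the OLS coefficients can only lower it. A minimal sketch:

```python
import numpy as np

# Under normal errors the Gaussian log-likelihood is a decreasing function of the
# SSR, so the OLS solution is exactly the MLE: perturbing it can only lower the
# log-likelihood.
rng = np.random.default_rng(4)
X = np.column_stack([np.ones(100), rng.normal(size=100)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=100)

beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

def gaussian_loglik(beta, sigma=1.0):
    r = y - X @ beta
    return -0.5 * len(y) * np.log(2 * np.pi * sigma**2) - 0.5 * r @ r / sigma**2

base = gaussian_loglik(beta_ols)
worse = [gaussian_loglik(beta_ols + rng.normal(scale=0.1, size=2)) for _ in range(20)]
print(all(w < base for w in worse))  # True: OLS maximises the Gaussian likelihood
```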
In the case of normally distributed data, OLS converges with the MLE, a solution which is BLUE (in that point). Once out of normal, OLS isn't BLUE anymore (in the terms of Gauss-Markov theorem) - this is because OLS looks to minimize the SSR whereas GMT defines BLUE in terms of minimal SE. See more here.
Generally speaking, given that a MLE exists (google for 'MLE failure' or for cases where MLE doesn't exist), it is easier to adjust it, either for minimizing the variance or making it unbiased (and therefore comparable to other estimators). | What are the properties of MLE that make it more desirable than OLS?
In the case of normally distributed data, OLS converges with the MLE, a solution which is BLUE (in that point). Once out of normal, OLS isn't BLUE anymore (in the terms of Gauss-Markov theorem) - this |
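To see the first point concretely, here is a small sketch (my addition, not from the original answer, with made-up data) showing that under normal errors the Gaussian likelihood is maximized exactly where OLS minimizes the SSR:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)  # true intercept 1, slope 2, normal errors

# OLS slope via the normal equations
X = np.column_stack([np.ones(n), x])
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]

# Profile log-likelihood over the slope b: with the intercept and sigma^2
# profiled out, it equals -n/2 * log(SSR(b)/n) + const, so maximizing the
# Gaussian likelihood is the same as minimizing the SSR.
grid = beta_ols[1] + np.linspace(-0.5, 0.5, 1001)
ssr = np.array([np.sum((y - np.mean(y - b * x) - b * x) ** 2) for b in grid])
b_mle = grid[np.argmin(ssr)]

print(beta_ols[1], b_mle)  # identical up to the grid resolution
```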
35,928 | Is building deep learning architectures a trial and error scheme? | At the current stage, neural network architecture selection is driven much more by empirical results rather than by solid mathematical theory. Moreover, the network architecture (depth, breadth, activation functions, connections) is not the only decision you have to make; the optimization algorithm and its parameters also interplay tightly with these choices. The specific dataset and the chosen loss function also define the loss surface along which you are optimizing. Sometimes even the hardware presents a limitation (e.g. the amount of available GPU memory). There is simply no universal, theoretically founded answer.
Of course, there are some intuitions: for example, you know how convolutions work, so it is easy to imagine what kind of information they can extract. Indeed, most of the papers introducing architectural tweaks, such as Batch normalization, Stochastic pooling, etc., provide such intuitive hints. It is your job to consider which of those make sense in your scenario. Any machine learning method has hyperparameters that you have to tune. In the case of neural networks, the architecture is simply a hyperparameter (albeit an obscure one).
Besides, there are plenty of threads dealing with this topic:
How to choose the number of hidden layers and nodes in a feedforward neural network?
Rules for selecting convolutional neural network hyperparameters
How to decide neural network architecture?
Is there a heuristic for determining the size of a fully connected layer at the end of a CNN?
35,929 | Do stepwise regression techniques increase a model's predictive power? | There are a variety of problems with stepwise selection. I discussed stepwise in my answer here: Algorithms for automatic model selection. In that answer, I did not primarily focus on the problems with inference, but on the fact that the coefficients are biased (the athletes trying out are analogous to variables). Because the coefficients are biased away from their true values, the out of sample predictive error should be enlarged, ceteris paribus.
Consider the notion of the bias-variance trade-off. If you think of the accuracy of your model as the variance of the prediction errors (i.e., MSE: $1/n\sum (y_i -\hat y_i)^2$), the expected prediction error is the sum of three different sources of variance:
$$\newcommand{\Var}{{\rm Var}}
E\big[(y_i -\hat y_i)^2\big] = \Var(\hat f) + \big[{\rm Bias}(\hat f)\big]^2 + \Var(\varepsilon)
$$
These three terms are the variance of your estimate of the function, the square of the bias of the estimate, and the irreducible error in the data generating process, respectively. (The latter exists because the data are not deterministic—you will never get predictions that are closer than that on average.) The former two come from the procedure used to estimate your model. By default we might think OLS is the procedure used to estimate the model, but it is more correct to say that stepwise selection over OLS estimates is the procedure. The idea of the bias-variance trade-off is that whereas an explanatory model rightly emphasizes unbiasedness, a predictive model may benefit from using a biased procedure if the variance is sufficiently reduced (for a fuller explanation, see: What problem do shrinkage methods solve?).
With those ideas in mind, the point of my answer linked at the top is that a great deal of bias is induced. All things being equal, that will make out of sample predictions worse. Unfortunately, stepwise selection does not reduce the variance of the estimate. At best, its variance is the same, but it is quite likely to make the variance much worse as well (for example, @Glen_b reports only 15.5% of the times were the right variables even chosen in a simulation study discussed here: Why are p-values misleading after performing a stepwise selection?).
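As a toy illustration of that bias (my addition, with made-up numbers; a single-predictor significance screen rather than full stepwise): keeping a slope estimate only when it clears a significance-style threshold inflates the kept estimates away from the small true value.

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps, true_slope = 50, 2000, 0.1
all_hats, kept = [], []
for _ in range(reps):
    x = rng.normal(size=n)
    y = true_slope * x + rng.normal(size=n)
    b_hat = np.sum(x * y) / np.sum(x * x)  # no-intercept OLS slope
    all_hats.append(b_hat)
    # rough |t| > 2 screen, since se(b_hat) is about 1/sqrt(n) here
    if abs(b_hat) > 2 / np.sqrt(n):
        kept.append(b_hat)

print(np.mean(all_hats))  # close to the true slope 0.1
print(np.mean(kept))      # noticeably inflated by the selection step
```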
35,930 | Do stepwise regression techniques increase a model's predictive power? | The exact effects will depend on the model and the "truth" which, of course, we can't know. You can look at the effects of stepwise in any particular case by crossvalidating or use a simple train and test approach. | Do stepwise regression techniques increase a model's predictive power? | The exact effects will depend on the model and the "truth" which, of course, we can't know. You can look at the effects of stepwise in any particular case by crossvalidating or use a simple train and | Do stepwise regression techniques increase a model's predictive power?
The exact effects will depend on the model and the "truth" which, of course, we can't know. You can look at the effects of stepwise in any particular case by crossvalidating or use a simple train and test approach. | Do stepwise regression techniques increase a model's predictive power?
The exact effects will depend on the model and the "truth" which, of course, we can't know. You can look at the effects of stepwise in any particular case by crossvalidating or use a simple train and |
35,931 | Quantile regression revealing different relationships at different quantiles: how? | The "true slope" in a normal linear model tells you how much the mean response changes for a one-point increase in $x$. By assuming normality and equal variance, all quantiles of the conditional distribution of the response move in line with that. Sometimes, these assumptions are very unrealistic: the variance or skewness of the conditional distribution depends on $x$ and thus its quantiles move at their own speed as $x$ increases. In QR you will immediately see this from very different slope estimates. Since OLS only cares about the mean (i.e. the average quantile), you can't model each quantile separately. There, you are fully relying on the assumption of a fixed shape of the conditional distribution when making statements about its quantiles.
EDIT: Embed comment and illustrate
If you are willing to make those strong assumptions, there is not much point in running QR, as you can always calculate conditional quantiles via the conditional mean and fixed variance. The "true" slopes of all quantiles will be equal to the true slope of the mean. In a specific sample, of course, there will be some random variation. Or you might even detect that your strict assumptions were wrong...
Let me illustrate by an example in R. It shows the least squares line (black) and then in red the modelled 20%, 50%, and 80% quantiles of data simulated according to the following linear relationship
$$
y = x + x \varepsilon, \quad \varepsilon \sim N(0, 1) \ \text{iid},
$$
so that not only the conditional mean of $y$ depends on $x$ but also the variance.
The regression lines of the mean and the median are essentially identical because of the symmetrical conditional distribution. Their slope is 1.
The regression line of the 80% quantile is much steeper (slope 1.9), while the regression line of the 20% quantile is almost constant (slope 0.3). This suits well to the extremely unequal variance.
Approximately 60% of all values are within the outer red lines. They form a simple, pointwise 60% forecast interval at each value of $x$.
The code to generate the picture:
library(quantreg)
set.seed(3249)
n <- 1000
x <- seq(0, 1, length.out = n)
y <- rnorm(n, mean = x, sd = x)
plot(y~x)
(fit_lm <- lm(y~x)) # intercept: 0.02445, slope: 1.04858
abline(fit_lm, lwd = 3)
# quantile cuts
taus <- c(0.2, 0.5, 0.8)
(fit_rq <- rq(y~x, tau = taus))
# tau= 0.2 tau= 0.5 tau= 0.8
# (Intercept) 0.00108228 -0.0005110046 0.001089583
# x 0.29960652 1.0954521888 1.918622442
lapply(seq_along(taus), function(i) abline(coef(fit_rq)[, i], lwd = 2, lty = 2, col = "red"))
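As a quick cross-check (my addition, not part of the original answer): under the simulated model $y = x(1+\varepsilon)$ with $\varepsilon \sim N(0,1)$, the theoretical $\tau$-quantile line is $x\,(1 + \Phi^{-1}(\tau))$, so the true slopes at $\tau = 0.2, 0.5, 0.8$ are roughly 0.16, 1, and 1.84, broadly in line with the estimated 0.30, 1.10, and 1.92.

```python
from scipy.stats import norm

# Theoretical slope of the tau-quantile line under y = x * (1 + eps):
for tau in (0.2, 0.5, 0.8):
    print(tau, 1 + norm.ppf(tau))
```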
35,932 | Inconsistent normality tests: Kolmogorov-Smirnov vs Shapiro-Wilk | There are innumerable ways a distribution can differ from a normal distribution. No test could capture all of them. As a result, each test differs in how it checks to see if your distribution matches the normal. For example, the KS test looks at the quantile where your empirical cumulative distribution function differs maximally from the normal's theoretical cumulative distribution function. This is often somewhere in the middle of the distribution, which isn't where we typically care about mismatches. The SW test focuses on the tails, which is where we typically do care if the distributions are similar. As a result, the SW is usually preferred. In addition, the KS test is not valid if you are using distribution parameters that were estimated from your sample (see: What is the difference between the Shapiro-Wilk test of normality and the Kolmogorov-Smirnov test of normality?). You should use the SW here.
But plots are generally recommended and tests are not (see: Is normality testing 'essentially useless'?). You can see from all your plots that you have a heavy right tail and a light left tail relative to a true normal. That is, you have a little bit of right skew.
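A reproducible sketch of the point above (my addition; the data are simulated, not the asker's): a right-skewed sample on which Shapiro-Wilk rejects decisively, while the KS p-value, computed with estimated parameters, is not trustworthy either way.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
x = rng.gamma(shape=2.0, size=500)  # right-skewed sample

sw_stat, sw_p = stats.shapiro(x)
# Plugging in the sample mean/sd here is exactly the invalid usage noted
# above: the resulting p-value is too conservative (a Lilliefors-type
# correction would be needed).
ks_stat, ks_p = stats.kstest(x, "norm", args=(x.mean(), x.std()))

print(sw_p, ks_p)
```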
35,933 | Inconsistent normality tests: Kolmogorov-Smirnov vs Shapiro-Wilk | You can't cherry-pick normality tests based on the results. In this case, you either go with the rejection from any test conducted, or don't use them at all. The KS test is not very powerful; it's not a "specialized" normality test. If anything, SW is probably more trustworthy in this case.
To me your QQ plot has signs of either a fat right tail or skew to the left, or both. I would suggest using Tukey's tool to study the fatness of the tails. It'll give you an indication of how much a distribution is like a normal or a Cauchy.
35,934 | Ask verification of a simple matrix result | Let $\mu = EX$ and $\Sigma = E(XX^T) - \mu\mu^T$. Then we need to show
$$
\mu^T (\Sigma + \mu\mu^T)^{-1}\mu \leq 1.
$$
Let $C = E(XX^T)$ so that $\Sigma = C - \mu\mu^T$. Using the matrix determinant lemma and the existence of $C^{-1}$ we can see that $\Sigma^{-1}$ exists exactly when $\mu^T C^{-1}\mu\neq 1$. If $\mu^T C^{-1} \mu = 1$ we are done, so WLOG we assume $\mu^T C^{-1}\mu\neq 1$.
Then, writing $c = \mu^T \Sigma^{-1}\mu$, the Sherman–Morrison formula gives
$$
\mu^T (\Sigma + \mu\mu^T)^{-1}\mu = \mu^T \Sigma^{-1}\mu - \mu^T \left(\frac{\Sigma^{-1}\mu\mu^T\Sigma^{-1}}{1 + \mu^T \Sigma^{-1}\mu}\right)\mu = c - \frac{c^2}{1+c} = \frac{c}{1+c} < 1
$$
since $\Sigma^{-1}$ is PSD so $c \geq 0$.
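A quick numeric spot-check of the identity and the bound (my addition, with arbitrary random values):

```python
import numpy as np

# For a random positive-definite Sigma and mean mu,
#   mu' (Sigma + mu mu')^{-1} mu == c / (1 + c)  with  c = mu' Sigma^{-1} mu,
# which is always strictly below 1.
rng = np.random.default_rng(0)
d = 5
A = rng.normal(size=(d, d))
Sigma = A @ A.T + np.eye(d)  # positive definite
mu = rng.normal(size=d)

c = mu @ np.linalg.solve(Sigma, mu)
lhs = mu @ np.linalg.solve(Sigma + np.outer(mu, mu), mu)

print(lhs, c / (1 + c))  # equal, and below 1
```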
35,935 | Variance of Minimum and Maximum of 2 iid Normal | If you can convince yourself that
$$
\max(X,Y) \overset{d}{=} -\min(X,Y),
$$
then taking the variance on both sides will give you your answer.
Regarding the other part, you'll probably have to integrate by hand.
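A Monte Carlo check of that symmetry (my addition): with the same simulated pairs, the sample variances of the max and the min agree, and both are near the closed form $1 - 1/\pi$ worked out in the other answers.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000)
y = rng.standard_normal(1_000_000)

# max(X, Y) has the same distribution as -min(X, Y), so equal variance
var_max = np.var(np.maximum(x, y))
var_min = np.var(np.minimum(x, y))

print(var_max, var_min, 1 - 1 / np.pi)
```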
35,936 | Variance of Minimum and Maximum of 2 iid Normal | Doing it out the long way, which generalizes to more than 2 iid Normals, here are the integral calculations in MAPLE:
$EA^2 = $
2*int(z^2*1/sqrt(2*Pi)*int(exp(-x^2/2),x=-infinity..z)*1/sqrt(2*Pi)*exp(-z^2/2),z=-infinity..infinity);
which equals 1.
$EA = $
2*int(z*1/sqrt(2*Pi)*int(exp(-x^2/2),x=-infinity..z)*1/sqrt(2*Pi)*exp(-z^2/2),z=-infinity..infinity);
which equals $1/\sqrt{\pi}$.
Therefore, Var(A) = $1-1/\pi = $0.68169... which agrees with my simulation.
Of course, Var(B) is identical.
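The same integrals can be checked numerically (my addition, using generic quadrature rather than Maple), with the max's density $2\phi(z)\Phi(z)$:

```python
import numpy as np
from scipy import integrate, stats

# Density of A = max of two iid standard normals
pdf = lambda z: 2 * stats.norm.pdf(z) * stats.norm.cdf(z)

ea2, _ = integrate.quad(lambda z: z**2 * pdf(z), -np.inf, np.inf)
ea, _ = integrate.quad(lambda z: z * pdf(z), -np.inf, np.inf)

print(ea2, ea, ea2 - ea**2)  # 1, 1/sqrt(pi), 1 - 1/pi
```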
35,937 | Variance of Minimum and Maximum of 2 iid Normal | Consider the standard normal case (since it's trivial to generalize). Let $Z = \max(X,Y)$.
$F_Z(z)=P(\max(X,Y)\leq z) = P(X\leq z,Y\leq z) = \Phi(z)^2$
hence obtain $f_Z(z)$ by differentiation.
As for expectation, note the following:
$\frac{d}{dx} \phi(x)\Phi(x) = -x\phi(x)\Phi(x) + \phi(x)^2$
Further note that $\phi(x)^2$ can be written in terms of $a\phi(bx)$ for some constants $a$ and $b$. From there you should be able to show that
$\int x\phi(x)\Phi(x)\,dx={\frac{1}{\sqrt{2}}}\frac{1}{\sqrt{2\pi}}\Phi(x\sqrt{2})-\phi(x)\Phi(x)+C$
(if not, show it by differentiation ...)
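For example (filling in a step the answer leaves to the reader), with $f_Z(z) = 2\phi(z)\Phi(z)$ the antiderivative above gives the mean directly:

$E(Z) = 2\int_{-\infty}^{\infty} z\,\phi(z)\Phi(z)\,dz = 2\left[\frac{1}{\sqrt{2}}\frac{1}{\sqrt{2\pi}}\Phi(z\sqrt{2}) - \phi(z)\Phi(z)\right]_{-\infty}^{\infty} = 2\cdot\frac{1}{2\sqrt{\pi}} = \frac{1}{\sqrt{\pi}}$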
And by taking derivatives of $x\phi(x)\Phi(x)$ you should be able to use previous results to get to $E(Z^2)$.
.... Or just use the table of definite integrals here:
https://en.wikipedia.org/wiki/List_of_integrals_of_Gaussian_functions#Definite_integrals
with a little manipulation, I think you can do the expectation and variance from there.
35,938 | What is the objective of a variational autoencoder (VAE)? | Similar to Auto-encoders, the objective of a Variational Auto-encoder is to reconstruct the input.
The only difference is that AEs have direct links between the encoder and decoder parts, whereas VAEs have a sampling layer which samples from a distribution (usually a Gaussian) and then feeds the generated samples to the decoder part.
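A minimal numeric sketch of that sampling layer (my addition; the encoder outputs and sizes are invented, not taken from any model in the answer): the usual formulation draws the sample with the reparameterization trick and penalizes it with a KL term toward the prior.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical encoder outputs for a batch of 4 inputs, 2 latent dims
mu = rng.normal(size=(4, 2))      # mean of q(z|x)
logvar = rng.normal(size=(4, 2))  # log-variance of q(z|x)

# Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I), so the
# sampling step stays differentiable in mu and logvar.
eps = rng.standard_normal(mu.shape)
z = mu + np.exp(0.5 * logvar) * eps

# KL( q(z|x) || N(0, I) ) for a diagonal Gaussian, per example
kl = -0.5 * np.sum(1 + logvar - mu**2 - np.exp(logvar), axis=1)
print(z.shape, kl)  # (4, 2), four non-negative penalties
```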
Here are some examples from different auto encoders as generative models. You can easily see how the networks are able to capture the data distribution and generate samples very similar to the original ones by only using random observations as an input.
On the top, there's the random input and on the bottom, there's the reconstructed image. The models are trained on MNIST.
If you have a look at this paper, you will find the answer to your question:
35,939 | What is the objective of a variational autoencoder (VAE)? | What do we want to compute and why?
I thought you are mostly interested in the inference issues here. Once we have the trained model, we can infer any distribution using it. For instance:
1. We can do sampling using the joint distribution $P(X,Z)$.
Why? It is a way for us to generate new data points.
How? We first sample the latent variables $Z$ and then sample from the conditional distribution $P(X|Z)$. If $X$ are all binary variables, we can just sample according to a binomial distribution.
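Sketched numerically (my addition; the decoder here is a made-up stand-in for a trained network, and the sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
latent_dim, n_pixels = 2, 784
w = rng.normal(size=(latent_dim, n_pixels))  # stand-in for trained decoder weights

def decoder(z):
    # Maps latent codes to per-pixel Bernoulli probabilities via a sigmoid
    return 1.0 / (1.0 + np.exp(-(z @ w)))

z = rng.standard_normal((5, latent_dim))  # 1) sample Z from the prior N(0, I)
probs = decoder(z)                        # 2) get P(X|Z)
x = rng.binomial(1, probs)                # 3) sample binary observations
print(x.shape)  # (5, 784)
```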
2. We can plot the conditional expectation $E(X|Z)$.
Why? We can intuit the role of the latent variables by checking the expectation of the manifest variables.
How? We just sample the latent variables and then compute the conditional distribution; its mean gives the expectation of the manifest variables. We can then plot the images on an N-D grid where the grid axes correspond to $z_1, \ldots, z_n$ respectively.
For instance, if we set two latent variables (discretized here) for the MNIST task, we may get such a grid:
We can see that most likely the vertical coordinate controls the thickness and the horizontal coordinate the curvature.
3. We can calculate the joint distribution of the manifest variables
$P(X)$.
Why? We can use this distribution to check the validity of new data. If the probability of a new case is too low, we can say that it is an invalid case; otherwise, it is most likely a valid, similar case.
How? We can use a validation dataset and a test dataset. First, over the validation dataset (all valid cases) we calculate the joint probability (of the manifest variables, by summing out the latent variables) of each case and then find the 75th and 25th percentiles. Then we try the test cases (which may contain invalid cases): if the probability falls in the range (25th-75th percentile) we say that it is valid, otherwise invalid. In this way we can filter out some bad cases from the test dataset.
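A toy version of that filter (my addition; the log-probabilities are simulated rather than computed from a model):

```python
import numpy as np

rng = np.random.default_rng(2)
# Pretend these are per-case log-probabilities from the trained model
valid_logp = rng.normal(loc=-100, scale=5, size=1000)  # validation set
test_logp = rng.normal(loc=-100, scale=20, size=200)   # test set, noisier

lo, hi = np.percentile(valid_logp, [25, 75])
is_valid = (test_logp >= lo) & (test_logp <= hi)
print(is_valid.mean())  # fraction of test cases accepted as valid
```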
Actually, once we know the parameters of the Bayesian model we can get any distribution we want, and we can use these to find many interesting things.
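Use 1 above (ancestral sampling from $P(X,Z)$) can be sketched as follows; the decoder weights here are a made-up stand-in for a trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a trained decoder: fixed weights mapping
# 2 latent variables to 784 Bernoulli pixel probabilities.
W = 0.1 * rng.standard_normal((2, 784))
def decoder(z):
    return 1.0 / (1.0 + np.exp(-(z @ W)))   # sigmoid -> probabilities in (0, 1)

z = rng.standard_normal((5, 2))    # step 1: sample latents z ~ P(Z) = N(0, I)
p = decoder(z)                     # step 2: conditional probabilities P(X|Z)
x = rng.random(p.shape) < p        # step 3: sample each pixel ~ Bernoulli(p)
print(x.shape)                     # five newly generated 784-pixel samples
```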
35,940 | What is the objective of a variational autoencoder (VAE)? | VAEs are networks that have two parts, the encoder and the decoder. The encoder has a V shape and so does the decoder; when they are placed together they have a shape like this: ><. The interesting part is the latent vector, which sits between the two V's. After you have trained the VAE (say, on faces), the encoder part can be discarded so you are left with the decoder part.
Now the latent vector is the input to the decoder. Let's say the latent vector is an array of 100 floating-point numbers between -1 and 1. Since you trained your VAE on faces, you can generate new faces by creating a random array of floating-point numbers between -1 and 1 and decoding it into a new face. This can be used to make all sorts of things: new shoes, bags, new maps for games, art, cars. The latent vector is drawn at random by you, from the latent distribution, after you have trained the network; the distribution has become the data. With 100 floating-point numbers the latent space is very large: if my math is correct, that would be 2 to the power 100*31 unique faces. That is a large space; you could create faces of people who were never born and of those who are dead. Hope that helps you.
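The generation step described above might look like this in code (a Python sketch; the decoder weights are made up, standing in for a trained network):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a trained decoder: fixed weights mapping a
# 100-number latent vector to a 64x64 grayscale image.
W = 0.05 * rng.standard_normal((100, 64 * 64))
def decode(z):
    return 1.0 / (1.0 + np.exp(-(z @ W)))   # pixel intensities in (0, 1)

z = rng.uniform(-1.0, 1.0, size=100)   # a random array of 100 numbers in [-1, 1]
face = decode(z).reshape(64, 64)       # a brand-new "face"
print(face.shape)
```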
35,941 | What is the objective of a variational autoencoder (VAE)? | The objective of an autoencoder is to learn an encoding of something (along with its decoding function). There are many uses for an encoding.
In a variational autoencoder what is learnt is the distribution of the encodings instead of the encoding function directly. A consequence of this is that you can sample many times the learnt distribution of an object’s encoding and each time you could get a different encoding of the same object. In this sense, variational autoencoders capture the idea that you can represent something in many ways as long as the “essence” is present in all the encodings. What is “essential” to be represented in each encoding is problem dependent; some problems may require more precision and other problems less. The precision of the encodings can be adjusted by adjusting the neural network used for learning the encoding as well as the cost function used to judge how similar is the reconstruction of the object from the encoding.
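A minimal sketch of this idea (the mean and standard deviation below are made-up outputs of a hypothetical trained encoder for a single object):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical encoder output for ONE object: the parameters of the learnt
# distribution over its encodings.
mu = np.array([0.3, -1.2])
sigma = np.array([0.5, 0.2])

# Sampling that distribution repeatedly gives different encodings of the
# same object, all carrying the same "essence" (the distribution itself).
encodings = mu + sigma * rng.standard_normal((4, 2))
print(encodings.shape)   # four distinct 2-d encodings of one object
```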
Variational autoencoders can learn more complex objects than plain autoencoders given the same amount of data, with the trade-off of being less precise (although being less precise is not always a bad thing).
35,942 | Why is the log derivative estimator considered of large variance? | Great question, but I'm not sure a good answer exists (at this point in time). Indeed, this estimator (alternatively called the REINFORCE estimator, the score function estimator [SFE], and the likelihood ratio estimator) is known to have very high variance, which is a major problem in RL, as well as in other problems (e.g., differentiating through discrete latent variable models).
I think there are three reasons, intuitively, why it has very high variance.
We use only values of $f$, never its derivative (which is usually unknown or non-existent). In other words, the estimator has no access to any information about how $f$ varies locally. Presumably, if we knew how our alteration was going to change $f$ infinitesimally, we could do better; i.e., exactly how our change in $\psi$ would affect $\theta$ (which we know probabilistically), and how perturbing $\theta$ would affect $f$. (We could then basically invoke the chain rule). Basically, we just don't have much information. Since we don't know how $f$ changes, we cannot build it into the estimator (and our estimator is of a gradient, i.e., something that is measuring how the function is supposed to be changing).
The SFE is extremely general, requiring absolutely nothing from $f$ except the need to be able to evaluate it. I think it is intuitive that it must pay a price for such generality. This is unlike, e.g., the reparametrization trick, which requires certain properties from $f$ and/or $\theta$, but may have lower variance (e.g., see [1]).
The SFE is unbiased. There are closely related methods that sacrifice this property for greatly reduced variance. I think it's intuitive that there is some bias-variance tradeoff for such estimators. Building off stochastic optimal control theory, however, it turns out it is possible to use control-variate baselines to reduce the variance without introducing bias (e.g., [2]).
The main alternative to the SFE is the reparameterization trick (closely related to pathwise derivative estimators, e.g. see [3,4]). However, if $f$ is truly a blackbox function, it cannot readily be applied (unlike the SFE). Furthermore, backprop through discrete variables requires "softening" the variable (e.g., the concrete [5] or Gumbel-Softmax [6] method), meaning some bias can be introduced to the estimator. Nevertheless, the resulting estimator is empirically known to be much lower variance and more stable (i.e., in applications).
I say empirically because it is known that theoretically the SFE can have lower variance in some cases (see [7], page 34). The same reference gives a condition (in the Gaussian case) under which the SFE is worse. In other words, it is theoretically possible that (in some cases) the SFE has lower variance than its alternatives, but in practice this usually seems to not be the case (again also see [1]).
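The variance gap is easy to see numerically. Below is a sketch with illustrative choices (not from the references above): $x \sim \mathcal N(\mu, \sigma^2)$ and $f(x) = x^2$, so both estimators target $\partial_\mu \mathbb E[x^2] = 2\mu$.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n = 1.0, 1.0, 100_000
eps = rng.standard_normal(n)
x = mu + sigma * eps                 # x ~ N(mu, sigma^2)

# Score-function (REINFORCE) estimator: f(x) * d/dmu log p(x; mu, sigma)
sfe = x**2 * (x - mu) / sigma**2

# Reparameterization (pathwise) estimator: d/dmu f(mu + sigma * eps) = 2x
rep = 2.0 * x

print(sfe.mean(), rep.mean())   # both are unbiased estimates of 2*mu = 2
print(sfe.var(), rep.var())     # the SFE's variance is several times larger
```

For this toy problem the exact variances are 30 (SFE) and 4 (reparameterization); with high-dimensional parameters or long RL trajectories the gap typically becomes far more dramatic.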
References
[1] Variance reduction properties of the reparameterization trick, Xu et al, 2019
[2] Backpropagation through the void: Optimizing control variates for black-box gradient estimation, Grathwohl et al, 2017
[3] Pathwise Derivatives Beyond the Reparameterization Trick, Jankowiak & Obermeyer, 2018
[4] Implicit Reparameterization Gradients, Figurnov et al, 2018
[5] The concrete distribution: A continuous relaxation of discrete random variables, Maddison et al, 2016
[6] Categorical reparameterization with gumbel-softmax, Jang et al, 2016 (abstract)
[7] Uncertainty in Deep Learning (thesis), Gal, 2016
This is a rather active research area. See also:
ARSM: Augment-REINFORCE-Swap-Merge Estimator for Gradient Backpropagation Through Categorical Variables, Yin et al, 2019
Evaluating the Variance of Likelihood-Ratio Gradient Estimators, Tokui & Sato, 2017
New Tricks for Estimating Gradients of Expectations, Walder et al, 2019
35,943 | Bayesian nonparametric answer to deep learning? | As the other answer notes, a common non-parametric Bayesian alternative to neural networks is the Gaussian Process.
(See also here).
However, the connection runs much deeper than that. Consider the class of models known as Bayesian Neural Networks (BNN). Such models are like regular deep neural networks except that each weight/parameter in the network has a probability distribution describing its value. A normal neural network is then somewhat like a special case of a BNN, except that the probability distribution on each weight is a Dirac Delta.
An interesting fact is that infinitely wide Bayesian neural networks become Gaussian Processes under some reasonable conditions.
Neal's thesis, Bayesian Learning for Neural Networks (1995) shows this in the case of a single-layer network with an IID prior.
More recent work (see Lee et al, Deep Neural Networks as Gaussian Processes, 2018) extends this to deeper networks.
So perhaps you can consider large BNNs as approximations of a non-parametric Gaussian process model.
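This convergence is easy to glimpse numerically. The sketch below (an illustrative architecture and scaling, not taken from the references) draws many one-hidden-layer networks from an i.i.d. prior and checks that, at a fixed input, the prior over outputs looks Gaussian, as the CLT argument behind Neal's result predicts:

```python
import numpy as np

rng = np.random.default_rng(1)
width, draws = 1000, 2000
x = np.array([0.5, -1.0])                    # one fixed 2-d input

# Prior draws for a 1-hidden-layer net with iid N(0,1) weights; the output
# layer is scaled by 1/sqrt(width) so the variance stays finite as width grows.
W1 = rng.standard_normal((draws, width, 2))
b1 = rng.standard_normal((draws, width))
w2 = rng.standard_normal((draws, width))
h = np.tanh(W1 @ x + b1)                     # hidden activations, one row per draw
f = (w2 * h).sum(axis=1) / np.sqrt(width)    # one network output per prior draw

# By the CLT the prior over f(x) tends to a Gaussian: mean ~ 0, kurtosis ~ 3.
kurt = ((f - f.mean()) ** 4).mean() / f.var() ** 2
print(f.mean(), kurt)
```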
As for your question more generally: in supervised learning people often just need point mappings, for which Bayesian non-parametrics seem to be less common (at least for now), mostly for computational reasons (the same applies to BNNs, even with recent advances in variational inference).
However, in unsupervised learning, they show up more often.
For instance:
Goyal et al, Nonparametric Variational Auto-encoders for Hierarchical Representation Learning, 2017
Abbasnejad and Dick, Infinite Variational Autoencoder for Semi-Supervised Learning, 2017
Chen, Deep Learning with Nonparametric Clustering, 2015
35,944 | Bayesian nonparametric answer to deep learning? | Hm, I am not sure, but maybe deep Gaussian processes might be one example of what you are looking for?
Deep Gaussian Processes
There is also more recent work on deep Gaussian processes on Google Scholar, but I am not knowledgeable enough to tell you what would be good to read:
https://scholar.google.de/scholar?as_ylo=2016&q=deep+gaussian+processes&hl=de&as_sdt=0,5&as_vis=1
35,945 | Difference between: Offset and Weights? | Offset and weights are very different things. An offset is really a covariate included in a model with a fixed coefficient of $1$, which is not estimated.
Offsets are mostly used with Poisson models to represent exposure; see Should I use an offset for my Poisson GLM?, When to use an offset in a Poisson regression?, and search this site, as there are many posts. For an example of use with logistic regression, see Using offset in binomial model to account for increased numbers of patients.
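To see what "a covariate with its coefficient fixed at $1$" buys you, here is a minimal sketch (in Python rather than R, with simulated data): an intercept-only Poisson regression with a log-exposure offset, fit by Newton's method, recovers the event rate per unit of exposure.

```python
import numpy as np

rng = np.random.default_rng(0)
t = rng.uniform(1.0, 5.0, size=1000)   # exposure for each observation
y = rng.poisson(0.7 * t)               # counts: true rate 0.7 events/unit exposure

# Model: log E[y_i] = b0 + 1 * log(t_i); the offset log(t_i) has its
# coefficient fixed at 1 and is not estimated.
b0 = 0.0
for _ in range(50):                    # Newton's method on the log-likelihood
    mu = np.exp(b0 + np.log(t))
    b0 += (y.sum() - mu.sum()) / mu.sum()

print(np.exp(b0))   # close to 0.7; in fact exactly y.sum() / t.sum() here
```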
Weights are used to modify the estimation algorithm by giving some observations more importance ("weight") than others. Their use varies by model type and software implementation, so you really have to read the docs!
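A tiny illustration of the most common meaning (frequency weights): weighting an observation by $w_i$ is equivalent to entering it into the estimation $w_i$ times. (A Python sketch for brevity; remember that each package defines its own weight semantics.)

```python
import numpy as np

y = np.array([1.0, 2.0, 5.0])
w = np.array([3, 1, 2])   # frequency weights: observation counts

weighted = np.average(y, weights=w)   # weighted estimate of the mean
replicated = np.repeat(y, w).mean()   # same data with rows literally repeated
print(weighted, replicated)           # identical values
```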
See also Can Weights and Offset lead to similar results in poisson regression?
35,946 | ANOVA for equivalence testing | Since you have only two groups, there is no need to perform an ANOVA. You can perform a TOST procedure by simply building a confidence interval (CI) for a two-sample problem, say x and y, where x = MP2-MP1 in the control group and y = MP2-MP1 in the treated group. You need to pay attention to the usual assumptions for the statistical tests: normality, heteroskedasticity, etc. But once you have your CI, you can see whether or not it is contained within the $\pm\delta$ region. In R you can do something like this:
set.seed(12)
x = rnorm(100)
y = rnorm(70)
# for two sample t-test with equal variances (a 90% CI corresponds to a TOST at the 5% level)
tt <- t.test(x, y, var.equal = TRUE, conf.level = 0.90)
# for Welch two sample t-test
tt.uneq <- t.test(x, y, var.equal = FALSE, conf.level = 0.90)
# for two sample Wilcoxon test
wcox <- wilcox.test(x,y, conf.int = TRUE, conf.level = 0.90)
delta <- 0.32
library(plotrix)
plotCI(x = 1, y=diff(rev(tt$estimate)), ui = tt$conf.int[2], li = tt$conf.int[1],
xlim=c(0,4), ylab = "Confidence intervals", ylim = c(-0.6, 0.6),
xlab ="Methods")
plotCI(x = 2, y=diff(rev(tt.uneq$estimate)), ui = tt.uneq$conf.int[2],
li = tt.uneq$conf.int[1], add=TRUE, col=2)
plotCI(x = 3, y=wcox$estimate, ui = wcox$conf.int[2],
li = wcox$conf.int[1], add=TRUE, col=3)
abline(h = c(-delta, delta), lwd = 2, lty = 2)
Here, to illustrate the idea, I'm considering three types of tests, but there are many others in the literature. Hope this helps.
35,947 | ANOVA for equivalence testing | Generally, equivalence testing is very limited to simple models. I have, however, recently learned that the lsmeans package in R does equivalence testing for models more complex than a t-test. I have not used that feature, but I have used lsmeans and it is a very flexible tool. This might work, but you will have to check out the package vignette for the details to be sure; see here.
As of 2022, the emmeans package (successor to lsmeans) can be used for equivalence tests of linear models; vignette info is here.
35,948 | Why does the first eigenvector in PCA resemble the derivative of an underlying trend? | Let's ignore the mean-centering for a moment. One way to understand the data is to view each time series as being approximately a fixed multiple of an overall "trend," which itself is a time series $x=(x_1, x_2, \ldots, x_p)^\prime$ (with $p=7$ the number of time periods). I will refer to this below as "having a similar trend."
Writing $\phi=(\phi_1, \phi_2, \ldots, \phi_n)^\prime$ for those multiples (with $n=10$ the number of time series), the data matrix is approximately
$$X = \phi x^\prime.$$
The PCA eigenvalues (without mean centering) are the eigenvalues of
$$X^\prime X = (x\phi^\prime)(\phi x^\prime) = x(\phi^\prime \phi)x^\prime = (\phi^\prime \phi) x x^\prime,$$
because $\phi^\prime \phi$ is just a number. By definition, for any eigenvalue $\lambda $ and any corresponding eigenvector $\beta$,
$$\lambda \beta = X^\prime X \beta = (\phi^\prime \phi) x x^\prime \beta = ((\phi^\prime \phi) (x^\prime \beta)) x,\tag{1}$$
where once again the number $x^\prime\beta$ can be commuted with the vector $x$. Let $\lambda$ be the largest eigenvalue, so (unless all time series are identically zero at all times) $\lambda \gt 0$.
Since the right hand side of $(1)$ is a multiple of $x$ and the left hand side is a nonzero multiple of $\beta$, the eigenvector $\beta$ must be a multiple of $x$, too.
In other words, when a set of time series conforms to this ideal (that all are multiples of a common time series), then
There is a unique positive eigenvalue in the PCA.
There is a unique corresponding eigenspace spanned by the common time series $x$.
Colloquially, (2) says "the first eigenvector is proportional to the trend."
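This conclusion is easy to verify numerically for exactly rank-one data (a quick sketch in Python; the trend and multiples below are illustrative values, and the R simulation at the end of this post does the full noisy version):

```python
import numpy as np

x = np.array([5.0, 11.0, 15.0, 25.0, 20.0, 35.0, 28.0])  # a common trend
phi = np.linspace(0.2, 2.0, 10)                          # per-series multiples
X = np.outer(phi, x)                                     # ideal data X = phi x'

# The top right-singular vector of X (top eigenvector of X'X) should be
# parallel to the trend x.
v1 = np.linalg.svd(X)[2][0]
v1 = v1 * np.sign(v1 @ x)            # singular vectors are defined up to sign
print(np.allclose(v1, x / np.linalg.norm(x)))   # True
```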
"Mean centering" in PCA means that the columns are centered. Since the columns correspond to the observation times of the time series, this amounts to removing the average time trend by separately setting the average of all $n$ time series to zero at each of the $p$ times. Thus, each time series $\phi_i x$ is replaced by a residual $(\phi_i - \bar\phi) x$, where $\bar\phi$ is the mean of the $\phi_i$. But this is the same situation as before, simply replacing the $\phi$ by their deviations from their mean value.
Conversely, when there is a unique very large eigenvalue in the PCA, we may retain a single principal component and closely approximate the original data matrix $X$. Thus, this analysis contains a mechanism to check its validity:
All time series have similar trends if and only if there is one principal component dominating all the others.
This conclusion applies both to PCA on the raw data and PCA on the (column) mean centered data.
Allow me to illustrate. At the end of this post is R code to generate random data according to the model used here and analyze their first PC. The values of $x$ and $\phi$ are qualitatively likely those shown in the question. The code generates two rows of graphics: a "scree plot" showing the sorted eigenvalues and a plot of the data used. Here is one set of results.
The raw data appear at the upper right. The scree plot at the upper left confirms the largest eigenvalue dominates all others. Above the data I have plotted the first eigenvector (first principal component) as a thick black line and the overall trend (the means by time) as a dashed red line. They are practically coincident.
The centered data appear at the lower right. Now the "trend" in the data is a trend in variability rather than level. Although the scree plot is far from nice--the largest eigenvalue no longer predominates--nevertheless the first eigenvector does a good job of tracing out this trend.
#
# Specify a model.
#
x <- c(5, 11, 15, 25, 20, 35, 28)
phi <- exp(seq(log(1/10)/5, log(10)/5, length.out=10))
sigma <- 0.25 # SD of errors
#
# Generate data.
#
set.seed(17)
D <- phi %o% x * exp(rnorm(length(x)*length(phi), sd=sigma))
#
# Prepare to plot results.
#
par(mfrow=c(2,2))
sub <- "Raw data"
l2 <- function(y) sqrt(sum(y*y))
times <- 1:length(x)
col <- hsv(1:nrow(D)/nrow(D), 0.5, 0.7, 0.5) # one color per row (series) of D
#
# Plot results for data and centered data.
#
k <- 1 # Use this PC
for (X in list(D, sweep(D, 2, colMeans(D)))) {
#
# Perform the SVD.
#
S <- svd(X)
X.bar <- colMeans(X)
u <- S$v[, k] / l2(S$v[, k]) * l2(X) / sqrt(nrow(X))
u <- u * sign(max(X)) * sign(max(u))
#
# Check the scree plot to verify the largest eigenvalue is much larger
# than all others.
#
plot(S$d, pch=21, cex=1.25, bg="Tan2", main="Eigenvalues", sub=sub)
#
# Show the data series and overplot the first PC.
#
plot(range(times)+c(-1,1), range(X), type="n", main="Data Series",
xlab="Time", ylab="Value", sub=sub)
invisible(sapply(1:nrow(X), function(i) lines(times, X[i,], col=col[i])))
lines(times, u, lwd=2)
#
# If applicable, plot the mean series.
#
if (zapsmall(l2(X.bar)) > 1e-6*l2(X)) lines(times, X.bar, lwd=2, col="#a03020", lty=3)
#
# Prepare for the next step.
#
sub <- "Centered data"
}
35,949 | Why does the first eigenvector in PCA resemble the derivative of an underlying trend? | Derivative of the data (~ first difference) removes the pointwise dependencies in the data which are due to nonstationarity (cf. ARIMA). What you then recover is approximately the stable stationary signal, which I guess the SVD is recovering.
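The differencing point is a one-liner to demonstrate (a toy Python illustration, not from the original answer): cumulatively summing stationary shocks produces a nonstationary random walk, and the first difference recovers the shocks exactly.

```python
import numpy as np

rng = np.random.default_rng(1)
eps = rng.standard_normal(5000)      # stationary shocks
y = np.cumsum(eps)                   # random walk built from them: nonstationary

dy = np.diff(y)                      # first difference...
assert np.allclose(dy, eps[1:])      # ...recovers the stationary shocks exactly
assert np.var(y) > 5 * np.var(dy)    # the level series is far more dispersed
```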
35,950 | Logistic regression with lasso versus PCA? | I answered: PCA allows you to do feature selection outside of the fit and transform and therefore gives more flexibility in the hyper-parameter search.
PCA can be used as a dimensionality reduction technique if you drop Principal Components based on a heuristic, but it offers no feature selection, as the Principal Components are retained instead of the original features. However, tuning the number of Principal Components retained should work better than using heuristics, unless there are many low variance components and you are simply interested in filtering them.
Whereas in lasso the "feature selection" is kind of done for you and therefore there is less scope of hyper parameter optimization.
LASSO ($\ell_1$ regularization) on the other hand can, intrinsically, perform feature selection as the coefficients of predictors are shrunk towards zero. It still requires hyperparameter tuning because there's a regularization coefficient that weights how severely the loss function is regularized.
As @MatthewDrury commented, ordinary PCA is agnostic to the target variable while LASSO regression isn't, as it's part of a regression model. This is the most important difference, actually.
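The contrast can be sketched with scikit-learn (a toy Python comparison, not part of the original answer; the synthetic data and hyperparameter values are arbitrary): PCA never sees `y` and keeps combinations of all features, while L1 logistic regression zeroes out coefficients.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

X, y = make_classification(n_samples=500, n_features=20,
                           n_informative=5, random_state=0)

# PCA is unsupervised: the components are chosen without looking at y.
pca_model = make_pipeline(PCA(n_components=5), LogisticRegression(max_iter=1000))
pca_model.fit(X, y)

# L1 logistic regression is supervised: unhelpful coefficients shrink to 0.
l1_model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
l1_model.fit(X, y)

n_selected = np.sum(l1_model.coef_ != 0)
print("features kept by lasso:", n_selected)  # typically well below 20
```

Both the number of retained components and the regularization strength `C` are tunable, which is the hyperparameter-search point made above.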
35,951 | Logistic regression with lasso versus PCA? | PCA, while reducing the number of features, does not care about the class labels. The only thing it cares about is preserving the maximum variance, which may not always be optimal for the classification task.
L1-Reg on the other hand pushes those features towards zero that do not have much correlation with the class labels. Hence, L1-Reg strives to reduce the number of features while also getting good classification performance.
To avoid under-fitting, we can always do hyper-parameter tuning to find the best lambda.
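The shrinkage behaviour is easiest to see in the orthonormal-design special case, where the lasso solution is just soft-thresholding of the OLS coefficients at the regularization level lambda (an illustrative Python sketch, not from the answer; the coefficient values are made up):

```python
import numpy as np

def soft_threshold(beta_ols, lam):
    """Lasso solution under an orthonormal design: shrink by lam, cut at zero."""
    return np.sign(beta_ols) * np.maximum(np.abs(beta_ols) - lam, 0.0)

beta_ols = np.array([3.0, -1.5, 0.4, 0.05, -0.02])  # hypothetical OLS estimates

for lam in (0.0, 0.1, 1.0):
    b = soft_threshold(beta_ols, lam)
    print(lam, b, "nonzero:", np.count_nonzero(b))
```

Larger lambda zeroes out more (small) coefficients, which is exactly the feature-pruning effect described above; tuning lambda trades that pruning against under-fitting.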
35,952 | Show that $Y_1+Y_2$ have distribution skew-normal | Reparameterizing the skew in terms of $\delta=\lambda/\sqrt{1+\lambda^2}$ and using the mgf of the skew normal (see below), since $Y_1$ and $Y_2$ are independent, $Z=Y_1 + Y_2$ has mgf
\begin{align}
M_Z(t) &= M_{Y_1}(t)M_{Y_2}(t) \\
&= 2e^{\mu_1 t +\sigma_1^2 t^2/2}\Phi(\sigma_1\delta t)e^{\mu_2t +\sigma_2^2 t^2/2} \\
&= 2e^{(\mu_1+\mu_2)t + (\sigma_1^2+\sigma_2^2)t^2/2}\Phi(\sigma_1 \delta t) \\
&= 2e^{\mu t + \sigma^2 t^2/2}\Phi(\sigma \delta' t),
\end{align}
that is, the mgf of a skew normal with parameters $\mu=\mu_1+\mu_2$, $\sigma^2=\sigma_1^2+\sigma_2^2$ and $\sigma\delta'=\sigma_1\delta$ where $\delta'$ is the new skew parameter. Hence,
$$
\delta'=\delta\frac{\sigma_1}\sigma=\delta\frac{\sigma_1}{\sqrt{\sigma_1^2+\sigma_2^2}}.
$$
In the other parameterization, the new skew parameter $\lambda'$ can be written, after some algebra, e.g. as
$$
\lambda' = \frac{\delta'}{\sqrt{1-\delta'^2}}=\frac{\lambda}{\sqrt{1 + \frac{\sigma_2^2}{\sigma_1^2}(1+\lambda^2)}}.
$$
The mgf of a standard skew normal can be derived as follows:
\begin{align}
M_X(t)&=Ee^{tX} \\
&=\int_{-\infty}^\infty e^{xt}2\frac1{\sqrt{2\pi}}e^{-x^2/2}\Phi(\lambda x)dx \\
&=2\int_{-\infty}^\infty \frac1{\sqrt{2\pi}}e^{-\frac12(x^2-2tx)}\Phi(\lambda x)dx\\
&=2\int_{-\infty}^\infty \frac1{\sqrt{2\pi}}e^{-\frac12((x-t)^2-t^2)}\Phi(\lambda x)dx \\
&=2e^{t^2/2} \int_{-\infty}^\infty \frac1{\sqrt{2\pi}}e^{-\frac12(x-t)^2}P(Z\le \lambda x)dx, & \text{where }Z \sim N(0,1) \\
&=2e^{t^2/2} P(Z\le \lambda U), & \text{where }U \sim N(t,1)\\
&=2e^{t^2/2} P(Z - \lambda U \le 0) \\
&=2e^{t^2/2} P(\frac{Z - \lambda U +\lambda t}{\sqrt{1+\lambda^2}} \le \frac{\lambda t}{\sqrt{1+\lambda^2}}) \\
&=2e^{t^2/2}\Phi(\frac\lambda{\sqrt{1+\lambda^2}}t).
\end{align}
The mgf of a skew normal with location and scale parameters $\mu$ and $\sigma$ is then
$$
M_{\mu + \sigma X}(t)=Ee^{(\mu+\sigma X)t} = e^{\mu t}M_X(\sigma t) = 2e^{\mu t+\sigma^2 t^2/2}\Phi(\frac\lambda{\sqrt{1+\lambda^2}}\sigma t).
$$
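The result can be checked by Monte Carlo, using the standard representation $Y = \mu + \sigma(\delta|Z_0| + \sqrt{1-\delta^2}\,Z_1)$ of a skew normal (a numerical sanity check in Python, not part of the derivation; the parameter values are arbitrary, and $Y_2$ is taken normal, matching the mgf product above):

```python
import math
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

mu1, s1, lam = 0.5, 1.0, 2.0           # Y1 ~ SN(mu1, s1, lam)
mu2, s2 = -0.2, 0.7                    # Y2 ~ N(mu2, s2^2), independent
delta = lam / math.sqrt(1 + lam**2)

z0, z1 = rng.standard_normal(n), rng.standard_normal(n)
y1 = mu1 + s1 * (delta * np.abs(z0) + math.sqrt(1 - delta**2) * z1)
y2 = mu2 + s2 * rng.standard_normal(n)
z = y1 + y2

# Derived parameters of the sum, as in the text.
mu, sig = mu1 + mu2, math.sqrt(s1**2 + s2**2)
delta_p = delta * s1 / sig

def Phi(x):  # standard normal cdf
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

t = 0.3
mgf_mc = np.mean(np.exp(t * z))                              # empirical E[e^{tZ}]
mgf_th = 2 * math.exp(mu * t + sig**2 * t**2 / 2) * Phi(sig * delta_p * t)
assert abs(mgf_mc / mgf_th - 1) < 0.01                       # they agree closely
```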
35,953 | How is a negative binomial regression model different from OLS with a logged outcome variable? | Assuming you've already identified the predictor(s) of interest/importance, the considerations for a model (in approximate order of importance) would be:
a. Do you want to model the conditional mean or something else (e.g. a quantile? some more robust location-estimate than the mean? ...)?
b. What is the anticipated form of the relationship between response and predictors (linear? exponential? power? unknown-but-smooth? unknown but smooth and monotonic? etc.)?
c. Should the variability be expected to be fairly constant, changing with the mean, or changing but unrelated to the mean?
d. Do you anticipate any substantial dependence between observations?
Now come distributional considerations, and whether you want to worry about bounding the impact of influential points (note we haven't yet looked at our data). But the distribution you need to worry about is the conditional distribution (the distribution at given value(s) of the predictors), not the marginal distribution.
Note that taking logs and fitting OLS would be ideal (close to optimal) if the relationship is exponential and your conditional response is very close to lognormal - which is continuous while the negative binomial is discrete. Further note that negative binomial models have a nonzero probability of a 0 but you can't take log of 0.
In practice, aside from discreteness / issues with small counts there may sometimes be little else to distinguish the two. Here's a plot of (conditionally) negative binomial (left) and lognormal (right) response, both with log "link". As you see they look pretty similar - they have similar movement of the mean with $x$, similar spread at each $x$ and so on.
On the other hand if you consider a lognormal model, why not a gamma or Weibull, for example? It's fairly easy to fit a model with the same kind of linear-in-the-logs relationship with any of them.
One thing that is different is the way the observations enter the model -- if at the optimum we approximate both models with transformed weighted least squares linear model, the points will get somewhat different relative weight.
Something to keep in mind if you're transforming: the predicted value on the log scale (which is an estimate of the mean of the logs) will not represent a mean when you transform back. If the conditional variability is very small that may not bother you, but more typically it can make a big difference -- in particular $\exp(E(\log(Y|x)))<E(Y|x)$ when the variance is non-zero.
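That retransformation bias is easy to demonstrate (a small Python illustration, not from the original answer; the lognormal parameters are arbitrary): back-transforming the mean of the logs gives the geometric mean, which sits below the true mean by the factor $e^{\sigma^2/2}$.

```python
import numpy as np

rng = np.random.default_rng(42)
mu, sigma = 1.0, 0.8
y = np.exp(mu + sigma * rng.standard_normal(100_000))  # lognormal responses

naive = np.exp(np.mean(np.log(y)))   # exp(mean of logs): the geometric mean
true_mean = np.mean(y)               # the actual mean, close to exp(mu + sigma^2/2)

# Jensen's inequality: exp(E log Y) < E Y whenever Var(log Y) > 0.
assert naive < true_mean
print(naive, true_mean, np.exp(mu + sigma**2 / 2))
```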
35,954 | Required sample size and degrees of freedom for a VAR | One should not expect there to be a hard-and-fast rule for such thresholds. The reason is that the precision of the estimates does not only depend on the ratio between parameters and observations, but also, for example, the "signal-to-noise ratio". That is, if the variance of the errors driving the process is large relative to the signal in the regressors, the estimates will be more variable all else equal.
Take the easiest possible example of a VAR, a univariate AR(1)
$$
y_t=\rho y_{t-1}+\epsilon_t,$$
where we assume $\epsilon_t\sim(0,\sigma^2_\epsilon)$. Then, we know that the variance of $y_t$ is (see, e.g., here)
$$
V(y_t)=\frac{\sigma^2_\epsilon}{1-\rho^2}
$$
Hence, the signal-to-noise-ratio is
$$
SNR=\frac{\frac{\sigma^2_\epsilon}{1-\rho^2}}{\sigma^2_\epsilon}=\frac{1}{1-\rho^2}
$$
Hence, there cannot be a single uniformly valid threshold for the ratio of parameters and observations, as I argue there is more information in the regressor $y_{t-1}$ as $\rho$ increases. In the limit as $\rho\to1$, we even have "superconsistency", i.e., the OLS estimator converges at rate $T$ rather than $T^{1/2}$.
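The variance formula used above is itself easy to verify by simulation (a quick Python check with arbitrary values; `scipy.signal.lfilter` applies the AR recursion):

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(7)
rho, n, burn = 0.8, 200_000, 1_000

eps = rng.standard_normal(n + burn)
# y[t] = rho * y[t-1] + eps[t]  <=>  filtering eps with a = [1, -rho]
y = lfilter([1.0], [1.0, -rho], eps)[burn:]

theory = 1.0 / (1.0 - rho**2)   # sigma_eps^2 / (1 - rho^2), with sigma_eps = 1
assert abs(np.var(y) / theory - 1) < 0.05
```

With $\rho=0.8$ the stationary variance is already $1/0.36\approx2.8$ times the innovation variance, which is the signal-to-noise point made above.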
To illustrate, here is a little simulation study showing average standard errors over 5000 simulation runs for different sample sizes $T$ and different AR coefficients $\rho$ (see the code below for the values used). We observe that, for any $T$, the standard errors are on average smaller for larger $\rho$, as predicted. (As expected, they fall in $T$ for any $\rho$.) So, depending on where your preferred threshold is, it may or may not be satisfied for any given $T$ depending on the value of $\rho$.
Here, the number of parameters is of course 2 (the estimate of $\rho$ and the constant) for any $\rho$ and $T$, so that is held constant, so to speak:
library(RColorBrewer)

rho <- seq(0.1, 0.9, by = .1)
T <- seq(50, 250, by = 50)
reps <- 5000
stderr <- array(NA, dim = c(length(rho), length(T), reps))
for (r in 1:length(rho)) {
  for (t in 1:length(T)) {
    for (j in 1:reps) {
      y <- arima.sim(n = T[t], list(ar = rho[r]))
      stderr[r, t, j] <- sqrt(arima(y, c(1, 0, 0), method = "CSS")$var.coef[1, 1])
    }
  }
}
jBrewColors <- brewer.pal(n = length(rho), name = "Paired")
# matplot (rather than matlines) opens a new plot before drawing the lines
matplot(T, t(apply(stderr, c(1, 2), mean)), type = "l",
        col = jBrewColors, lwd = 2, lty = 1,
        xlab = "T", ylab = "average standard error")
legend("topright", legend = rho, col = jBrewColors, lty = 1, lwd = 2)
This is by no means unique to VAR models. In general, there is little guidance as to the sample sizes for which asymptotic approximations are useful guides to finite-sample distributions. One exception is the normal distribution as an approximation to the t-distribution, where eyeballing the densities suggests that from 30 degrees of freedom onwards, the two are so similar we may as well use the normal. But even there, the choice of 30 is quite arbitrary. One might even go further and say that this particular rule of thumb has done more harm than good, because, in my experience, it has led quite a few people to infer that asymptotic approximations are useful as soon as one has 30 observations no matter what context is given...
35,955 | Generate a random set of numbers with fixed sum and desired means and variances? | Multivariate logit-normal distribution can be considered as a generalization of the Dirichlet distribution that you have in mind. It is parametrized by a vector of $D-1$ means $\boldsymbol{\mu}$ for $D$-dimensional vector $\mathbf{x}$ and covariance matrix $\boldsymbol{\Sigma}$,
$$
f_X( \mathbf{x}; \boldsymbol{\mu} , \boldsymbol{\Sigma} ) = \frac{1}{ | 2 \pi \boldsymbol{\Sigma} |^\frac{1}{2} } \, \frac{1}{ \prod\limits_{i=1}^D x_i } \, e^{- \frac{1}{2} \left\{ \log \left( \frac{ \mathbf{x}_{-D} }{ x_D } \right) - \boldsymbol{\mu} \right\}^\top \boldsymbol{\Sigma}^{-1} \left\{ \log \left( \frac{ \mathbf{x}_{-D} }{ x_D } \right) - \boldsymbol{\mu} \right\} }
$$
Since it is defined as a logistic transformation of multivariate normal distribution $\mathbf{y} \sim \mathcal{N} \left( \boldsymbol{\mu} , \boldsymbol{\Sigma} \right)$,
$$
\mathbf{x} = \left[ \frac{ e^{ y_1 } }{ 1 + \sum_{i=1}^{D-1} e^{ y_i } } , \dots , \frac{ e^{ y_{D-1} } }{ 1 + \sum_{i=1}^{D-1} e^{ y_i } } , \frac{ 1 }{ 1 + \sum_{i=1}^{D-1} e^{ y_i } } \right]^\top
$$
the random generation is straightforward since you need only to take samples from the multivariate normal distribution and transform them.
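The sampling recipe in that last sentence is a few lines of numpy (a Python sketch; the mean vector and covariance matrix used are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
D = 4
mu = np.array([0.5, -0.3, 0.1])            # D-1 = 3 means
Sigma = 0.2 * np.eye(D - 1) + 0.05         # a valid (D-1)x(D-1) covariance

y = rng.multivariate_normal(mu, Sigma, size=10_000)   # Gaussian draws
e = np.exp(y)
denom = 1.0 + e.sum(axis=1, keepdims=True)
x = np.hstack([e / denom, 1.0 / denom])    # the logistic transformation above

# Every draw lies on the simplex: positive entries summing to one.
assert np.all(x > 0)
assert np.allclose(x.sum(axis=1), 1.0)
```

The means and covariance of `x` can then be tuned through `mu` and `Sigma`, which is the extra flexibility over the Dirichlet mentioned above.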
35,956 | Generate a random set of numbers with fixed sum and desired means and variances?

Wikipedia (https://en.wikipedia.org/wiki/Dirichlet_distribution) describes the means and variances of the n individual x's in terms of the n exponents (written as a's) and their sum. The means will not be changed if all the a's are multiplied by some constant b. However the variances will change, at least somewhat.
More, but more complicated, control of both would be provided by giving each "a" its own "b".
Depending on exactly what you are trying to achieve, you might write a set of simultaneous equations for the means and variances in terms of a's and b's that could be inverted to give a prescription for the latter in terms of the former. These equations might require a nonlinear numeric solver. (But note that multiplying the a's and b's would not be the only option for combining the two.)
However, because of the constraints built into this distribution, it is not clear how much flexibility will be available in making independent adjustments to the means and variances. You would have to experiment.
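Experimenting is easy with numpy's Dirichlet sampler; a sketch with illustrative exponents a and a common scaling constant b (the claim above: scaling all a's leaves the means alone but shrinks the variances):

```python
import numpy as np

rng = np.random.default_rng(1)

a = np.array([2.0, 3.0, 5.0])   # illustrative exponents
b = 10.0                        # common scaling constant

small = rng.dirichlet(a, size=100_000)
large = rng.dirichlet(b * a, size=100_000)

# Means are unchanged (up to Monte Carlo error): a_i / sum(a) in both cases
mean_small = small.mean(axis=0)
mean_large = large.mean(axis=0)

# Variances shrink as the exponents grow
var_small = small.var(axis=0)
var_large = large.var(axis=0)
```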
35,957 | Hypothesis testing with an unusual alternative hypothesis ($H_1:\mu = \mu_1$)

It's standard to look at point null vs point alternative when first introducing the Neyman Pearson lemma, so if you've seen that you'll probably have seen this simple-alternative case done already.
Very little is different in the simple-null-simple-alternative case from the simple-null case; you are just in a situation where those (60 and 57) are the only two possible values for $\mu$.
Clearly unusually small values of $\bar{X}$ would lead you to consider $H_0$ to be untenable, but large values (larger than 60) would not lead to you conclude the mean is $57$ instead of $60$, so you only reject on one side.
So all that's left to do is give a test statistic whose distribution under the null hypothesis can be calculated, in order to determine a rejection region for that statistic that corresponds to small values being taken by $\bar{X}$.
You already know a test statistic which will have a known distribution under $H_0$ (... and if you use the Neyman-Pearson lemma, you can argue it will be the most powerful test in this circumstance).
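Concretely, a sketch of the resulting one-sided test, using the numbers quoted elsewhere in the thread ($\sigma = 3$, $n = 20$, observed $\bar{x} = 58.05$); treat these as assumptions here, since the full problem statement is not reproduced:

```python
import math

# Numbers quoted elsewhere in the thread (assumed here for illustration)
mu0, sigma, n, xbar = 60.0, 3.0, 20, 58.05
alpha = 0.05

# Test statistic with a known N(0, 1) distribution under H0: mu = 60
z = (xbar - mu0) / (sigma / math.sqrt(n))

# Reject only for small values of xbar (one-sided region toward H1: mu = 57)
z_alpha = 1.6449  # upper 5% point of the standard normal
reject_h0 = z < -z_alpha
```

With these numbers $z \approx -2.91$, which falls in the one-sided rejection region.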
35,958 | Hypothesis testing with an unusual alternative hypothesis ($H_1:\mu = \mu_1$)

I'm not as expert at statistics as some people here so please bear with me; but I'd like to throw in my two cents worth. As I understand the example problem you quote, it is basically asking if a sample mean of 57 is more likely, or has a lower (more statistically significant) p-value, than a sample mean of 60, if the true mean is 58.05. So your H0 is that mu=60 and H1 is that mu=57. Or, "is 57 truly lower than 60, given the sample size (20), true mean (58.05) and the alpha level (.05) stated?" and at what p-level. So (again as I understand it) you would test the hypothesis in the usual way and use a one-sided p-value to test if 57 or lower is significantly different from 60, given the above parameters. (my intuition tells me it's not different from 60, because 60 is farther away from the mean of 58.05 than 57 is and the distribution is normal, i.e., symmetrical, and both are within one standard deviation (3) from the mean).
Your actual question however seems to be asking what is the probability of your H1 mu being exactly equal to 57; is that correct? To answer that may require a different approach, perhaps a probability density function.
35,959 | Manually calculating logistic regression coefficient

You are right that although you should be able to calculate the OLS coefficient estimate in logit space, you can't do it directly because the logit, $g(y) = \log \frac{p}{1-p}$, goes either to $-\infty$ for $y=0$ or $\infty$ for $y=1$. An added difficulty is that the variance in this model depends on $x$.
The likelihood for logistic regression is optimized by an algorithm called iteratively reweighted least squares (IRLS). There is a nice breakdown of this in Shalizi's Advanced Data Analysis from an Elementary Point of View, from which I have the details below:
To deal with the infinite logit problem, make a first-order Taylor approximation to $g(y)$ around the point $p$ such that $g(y) \approx g(p) + (y − p)g'(p)$. Since $g(p)$ is by definition $\beta_0 + \beta x$, put that in there instead of $g(p)$ and say that your effective response is $z = \beta_0 + \beta x + (y-p)g'(p)$.
Calculate the variance $V[Z|X=x] = V[(Y-p)g'(p) | X=x]=g'(p)^2V(p)$. Use this to weight your samples so that you can simply do a weighted regression of $z$ on $x$.
At this point you might ask yourself how you can use the regression coefficients you're trying to estimate to calculate your effective response, $z$. Of course you can't. And what is $p$ anyway? That's where the iterative part of IRLS comes in: you start with some guess at the $\beta$s, for instance to set them all to 0. From this you can first calculate the fitted probabilities $p$, and second use these fitted probabilities and your current coefficient estimates to calculate $z$.
All this and you get a new estimate for your $\beta$s, and it should be closer to the right one, but probably not the right one. So you iterate: use the new coefficients to calculate new fitted probabilities, calculate new effective responses, new weights, and go again. Sooner or later, unless you're unlucky, the $\beta$s will converge to a nice estimate. Says Shalizi:
The treatment above is rather heuristic, but it turns out to be equivalent to using Newton’s method, only with the expected second derivative of the log likelihood, instead of its actual value.
So in summary: you never use the logit directly because, as you point out, it's impractical. You can certainly calculate the logistic regression coefficients by hand, but it won't be fun.
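A minimal numpy sketch of the IRLS loop described above, run on simulated data (illustrative only: no convergence check or safeguards against extreme fitted probabilities):

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated data with known coefficients beta0 = -1, beta1 = 2
n = 5_000
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
true_beta = np.array([-1.0, 2.0])
p_true = 1.0 / (1.0 + np.exp(-X @ true_beta))
y = rng.binomial(1, p_true)

beta = np.zeros(2)                # start with all coefficients at 0
for _ in range(25):
    eta = X @ beta
    p = 1.0 / (1.0 + np.exp(-eta))
    w = p * (1.0 - p)             # weights: 1 / (g'(p)^2 V(p)) for the logit link
    z = eta + (y - p) / w         # effective response: eta + (y - p) g'(p)
    # Weighted least squares of z on X gives the next beta
    WX = X * w[:, None]
    beta = np.linalg.solve(X.T @ WX, X.T @ (w * z))
```

After a handful of iterations the estimates settle close to the coefficients used to simulate the data.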
35,960 | Can I use output of classifier A as feature for classifier B?

Yes, using outputs of one model as inputs of another is possible, and this concept is used to some extent in some approaches.
If you stack models on top of each other, in the application case the model at position $n$ in the chain essentially does preprocessing/feature derivation for the model at position $n+1$. One advantage of this is that more preprocessing and feature derivation conceptually increases the power of the whole setup (let's just assume it's this way for simplicity) - hence more complex problems can be solved with such chains. But this will also make training more complex. The question will usually be: how to train this chain of models? All at the same time? One after another? If supervised - which target variable to use for models not at last position in the chain?
Such thoughts lead to e.g. what we currently see in deep learning: one core concept of deep neural networks (deep nets) is that there are many layers, where each layer is an additional feature preprocessing/feature derivation that is embedded right into the model (therefore increasing the model's power). Conceptually, all layers could be trained together - which would lead to straight, supervised learning. But in practice, complexity issues usually prevent this. This is why some deep nets learn some layers individually, some learn parts in a supervised manner, some in an unsupervised manner, and most mix those concepts. Consider e.g. the learning process and information flow in deep belief networks, convolutional neural nets, or deep autoencoders. My guess is that those might get pretty close to what you had in mind when asking your question.
Sidenote: you might also want to look into the concept of boosting if you are not familiar with it yet. Boosting does not "chain" any models, but it uses the training error of one model to influence training of subsequent models - which turned out to be very effective in the past. Boosting in a nutshell: it assigns a weight to each sample at the start of training. After a model is trained and evaluated, samples that were classified wrong get their weights increased, while samples that were classified correctly get their weights decreased. The subsequent model uses those new weights in training, and therefore puts more weight on samples that were classified wrong before. This process is repeated, and this way each model is influenced by its predecessor and focuses on the part of data that is "currently difficult to classify".
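The weight update from that nutshell description can be sketched in a few lines of numpy. The doubling/halving factors here are purely illustrative; real boosting algorithms such as AdaBoost derive the factors from the weighted error rate:

```python
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 0])
y_pred = np.array([1, 0, 0, 1, 1, 0])   # predictions of the current model

n = len(y_true)
w = np.full(n, 1.0 / n)                 # equal weights at the start

wrong = y_pred != y_true
w[wrong] *= 2.0        # misclassified samples: weight up (illustrative factor)
w[~wrong] *= 0.5       # correctly classified samples: weight down
w /= w.sum()           # renormalise so the weights stay a distribution
```

The next model in the sequence would then be trained with these sample weights, concentrating on the two misclassified points.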
35,961 | Variance explained - equivalent statistics for categorical data?

For generalised linear models (GLMs) we use deviance values as a generalisation of the scaled sums-of-squares used in regression (see related answer here). Instead of the coefficient of determination used in linear regression, we would use McFadden's $R^2$ value, which is given by:
$$R^2_\text{GLM} = \frac{\hat{\ell}_p - \hat{\ell}_0}{\hat{\ell}_S - \hat{\ell}_0},$$
where $\hat{\ell}_S$ is the maximised log-likelihood under the saturated model (one coefficient per data point), $\hat{\ell}_p$ is the maximised log-likelihood under the actual model, and $\hat{\ell}_0$ is the maximised log-likelihood under the null model (intercept term only).
This goodness-of-fit quantity measures the proportion of the deviance beyond the null model that is explained by the explanatory variables in the actual model. It is a generalisation of the coefficient of determination in the Gaussian linear regression model (i.e., it reduces down to that statistic in that model).
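For a Bernoulli response the saturated model fits each observation exactly, so $\hat{\ell}_S = 0$ and the statistic needs only the fitted and null log-likelihoods. A small sketch with made-up fitted probabilities (the toy numbers are assumptions for illustration):

```python
import numpy as np

def bernoulli_loglik(y, p):
    '''Log-likelihood of 0/1 outcomes y under fitted probabilities p.'''
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

# Toy example: outcomes and fitted probabilities from some model
y = np.array([0, 0, 1, 1, 1, 0, 1, 1])
p_model = np.array([0.1, 0.3, 0.8, 0.7, 0.9, 0.2, 0.6, 0.8])

ll_model = bernoulli_loglik(y, p_model)
ll_null = bernoulli_loglik(y, np.full_like(p_model, y.mean()))  # intercept-only fit
ll_sat = 0.0   # saturated model matches each Bernoulli observation exactly

r2_mcfadden = (ll_model - ll_null) / (ll_sat - ll_null)
```

The statistic is 0 when the model does no better than the intercept-only fit and approaches 1 as it nears the saturated fit.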
35,962 | Variance explained - equivalent statistics for categorical data?

If you’re comfortable using $R^2$ as a crude approximation for proportion of variance explained, knowing that it is not true in the nonlinear case like in a logistic regression (but you decide how good it is for your work as an easy-to-compute estimate that your audience thinks it understands), then it doesn’t matter if you’re using categorical or continuous features. The decomposition of the total sum of squares that leads to $R^2$, which is given in the link, never explicitly mentions the features (yes, they’re in there implicitly through the predictions, $\hat y_i$), only the observations and predicted values.
I actually think the bounty comments ask a different question than the original post. Since the bounty message will go away in a few days, I’ll include the text below, as I believe I am addressing the bounty question more than the original question. Additionally, the bounty asks for a reputable source. I’d expect most regression textbooks to give the decomposition of the total sum of squares to explain how $R^2$ winds up giving the proportion of variance explained (at least in OLS linear with an intercept). My professor used Agresti’s book, so that’s what I know. What you’re looking for is in chapter 2, pages 47-54.
Agresti, Alan. Foundations of linear and generalized linear models. John Wiley & Sons, 2015.
BOUNTY
I can report an R2 as a crude indicator of the fraction of variation "explained" by numeric predictors, but how can I report a similar metric for categorical predictors? It feels like there ought to be a simple expression for this but I'm struggling to find a reference that expresses it!
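To underline the point that the computation never explicitly mentions the features, here is the statistic written only in terms of observations and predictions (toy numbers, chosen for illustration):

```python
import numpy as np

def r_squared(y, y_hat):
    '''1 - SS_res / SS_tot: uses only observations and predictions.'''
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

y = np.array([3.0, 5.0, 7.0, 9.0])
y_hat = np.array([2.5, 5.5, 7.0, 9.0])  # predictions from any model, whatever its features
```

Whether the model behind `y_hat` used categorical or continuous features is invisible to this calculation.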
35,963 | Variance explained - equivalent statistics for categorical data?

Given that you only have one dependent and one independent variable, you can perform correlation tests tailored for categorical variables and report the correlation statistics (and their p-values). This article is a good overview of your options.
That said, alternatively you can simply perform cross validation and report the accuracy metrics from there. However, although this is a straightforward and a more "universal" solution let's say, be aware that this is a biased estimator of the variance.
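One commonly used statistic of this kind (my example, not necessarily what the linked article recommends) is Cramér's V, built from the chi-squared statistic of the contingency table between the two categorical variables:

```python
import numpy as np

def cramers_v(table):
    '''Cramér's V from a contingency table of counts (rows x cols).'''
    table = np.asarray(table, dtype=float)
    n = table.sum()
    expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / n
    chi2 = np.sum((table - expected) ** 2 / expected)
    k = min(table.shape) - 1
    return np.sqrt(chi2 / (n * k))

# Toy 2x2 tables: strong vs no association between row and column category
strong = [[40, 10],
          [10, 40]]
weak = [[25, 25],
        [25, 25]]
```

V ranges from 0 (no association) to 1 (perfect association), which makes it loosely comparable across tables of different sizes.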
35,964 | Variance explained - equivalent statistics for categorical data?

Based on Dave's and Vasilis' response (I'm sorry Ben my non-statistical brain could not quite absorb your answer - which is on me) I wrote and roughly validated a python function to produce a grid of "R2 equivalents".
It worries me a bit that I'm needing to write code to do what I would have thought was a common task. Thus I'm posting my code (a) so others with a similar need may use it and (b) for review by anybody with both experience and time to provide feedback!
import pandas as pd

def explain_variation_by_category_grid(data_in, cat_cols, cont_cols, dropna=True):
    '''
    Generate a grid of quasi r2 values for the approximate fraction of squared
    difference in the continuous variable associated with each category.
    Influenced by discussion here:
    https://stats.stackexchange.com/questions/215606/variance-explained-equivalent-statistics-for-categorical-data

    Parameters:
        data_in (pandas dataframe): input data
        cat_cols (list): category columns
        cont_cols (list): continuous columns
        dropna (bool): exclude NA values

    Returns:
        quasi R2 (dataframe): indexed on the continuous variables and one column for each category
    '''
    def sum_of_var_sqd(a):
        ''' calculate sum of squared difference from group mean '''
        mean = a.mean()
        return ((a - mean) ** 2).sum()

    res_dict = {}
    for split in cat_cols:
        res_dict[split] = {}
        for var in cont_cols:
            f = [split, var]
            data = data_in[f].dropna() if dropna else data_in[f]
            all_sum_of_var_sqd = sum_of_var_sqd(data[data.columns[1]])
            agg = pd.pivot_table(data, index=split, aggfunc=[sum_of_var_sqd])
            cat_sum_of_var_sqd = agg[agg.columns[0]].sum()
            r2 = 1 - (cat_sum_of_var_sqd / all_sum_of_var_sqd)
            res_dict[split][var] = r2
    return pd.DataFrame.from_dict(res_dict)
My test data consists of about 150k records, which I've used to train NN and boosted forest (BF) models, and generated out of sample predictions (*_pred) and calculated residuals (*_act - *_pred = *_d) for two outcomes of interest (Recovery time = ~ minutes to wake up after surgery; DAOH30 = "Days alive and out of hospital").
As you can see the categorical predictors are all weakly associated with the actual values but the models (using the data they had) all made predictions more strongly associated with predictors. The residuals had values close to zero which is what I would hope for given the expectation is that the model would be cancelling out the impact of the variables.
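For anyone reviewing the function, the per-cell quantity reduces to one minus the ratio of within-group to total sum of squares, which can be cross-checked directly with a groupby on hypothetical toy data:

```python
import pandas as pd

# Hypothetical data: one categorical column and one continuous column
df = pd.DataFrame({
    'group': ['a', 'a', 'b', 'b'],
    'value': [1.0, 1.0, 3.0, 3.0],   # fully determined by the group
})

total_ss = ((df['value'] - df['value'].mean()) ** 2).sum()
within_ss = df.groupby('group')['value'].apply(
    lambda s: ((s - s.mean()) ** 2).sum()).sum()

quasi_r2 = 1.0 - within_ss / total_ss   # 1.0 here: the category explains everything
```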
35,965 | Intuition of Bayesian normalizing constant | I needed to do this for a course I'm preparing, so I created this demonstration website: A demonstration of Bayes' theorem as "selecting a subset" in the binomial case (make sure to hide the toolbars, bottom right). Basically, if you show the joint distribution -- which is just $p(y\mid\theta)p(\theta)$ -- you can see the "subsets" of the joint distribution that you need to select, which are those $\theta$ values that correspond to $Y=y$ (whatever you observed).
The source code for that page can be found here: Rmarkdown source for page.
(I used $\theta$ for the binomial probability instead of $p$ because $p(p)$ looks confusing...)
35,966 | Intuition of Bayesian normalizing constant | Besides the interpretations you mention, you can think of the normalizing constant as the value of the prior predictive distribution at the observed x. If the prior predictive is discrete then this is a probability mass, and if the prior predictive is continuous it is a probability density.
The prior predictive in the continuous case is
$$
p(x) = \int_\Theta p(\theta)\,p(x\mid\theta)\,d\theta
$$
This is a distribution that assigns probability mass/density to the outcomes in the sample space. Then, when $x$ is observed, it is fixed at the observed $x$ and becomes the denominator of Bayes' theorem.
However, note that with continuous distributions there is no mathematical constraint on the density value assigned to a set with measure zero (i.e., zero probability), and since any specific point on a continuous distribution indeed has measure zero then technically the value of the density on the prior predictive at exactly x can be set arbitrarily. But that aside, I think this way of visualizing the normalizing constant is fairly intuitive.
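As a concrete check of this view (a sketch with made-up prior parameters, not taken from the answer above): with a Beta$(a,b)$ prior and a Binomial likelihood, the prior predictive is beta-binomial, and numerically integrating $p(\theta)\,p(x\mid\theta)$ over $\theta$ reproduces its closed form:

```python
import numpy as np
from math import comb, exp, lgamma

def log_beta(a, b):
    # log B(a, b) via log-gamma, to avoid overflow for larger arguments
    return lgamma(a) + lgamma(b) - lgamma(a + b)

a, b = 2.0, 3.0          # Beta prior parameters (chosen only for illustration)
n, y = 10, 4             # observed: 4 successes in 10 trials

# Numerically integrate p(y) = ∫ p(theta) p(y | theta) d(theta) on a grid
theta = np.linspace(1e-6, 1 - 1e-6, 20001)
prior = theta**(a - 1) * (1 - theta)**(b - 1) / exp(log_beta(a, b))
lik = comb(n, y) * theta**y * (1 - theta)**(n - y)
f = prior * lik
p_y_numeric = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(theta)))  # trapezoid rule

# Closed form: the beta-binomial pmf at y
p_y_exact = comb(n, y) * exp(log_beta(y + a, n - y + b) - log_beta(a, b))
```

The two values agree to numerical precision, illustrating that the normalizing constant is just the prior predictive evaluated at the observed data.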
You can read more here. (Let me know if you don't have access)
This too, which is a bit more modern.
35,967 | Intuition of Bayesian normalizing constant | Richard's 3-d graphic was very helpful. What I need, however, is something I can paste as a graph in a manuscript. After some searching, I located this image from Westfall and Henning, Understanding Advanced Statistical Methods, Chapman & Hall/CRC, 2013.
Relabeling the axes as the binomial probability p on the left and the number of successes y on the right then illustrates a binomial distribution, and the face of the joint distribution then is the marginal distribution to be integrated.
Further, this joint distribution made me realize that our vocabulary for this is lacking. We use the term “marginal” for the relevant subset for the normalizing constant because that vocabulary comes from a two-way contingency table with discrete data, where the sums of the probabilities are written in the margins of the table. We continue to use the same vocabulary in the continuous joint-distribution case, but it is not descriptive.
But the figure from Westfall and Henning makes clear that for the normalizing constant we are integrating over a “slice” of the joint distribution for the value of y, the number of successes in the binomial case. “Slice” is much clearer than marginal, and this figure makes instantly clear what is the relevant subset for integration.
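The “slice” idea can be sketched numerically. A minimal example (a uniform prior and $n=10$ Binomial trials, chosen only for illustration): fix the observed $y$, take that slice of the joint $p(\theta)\,p(y\mid\theta)$, and its area is exactly the normalizing constant:

```python
import numpy as np
from math import comb

n, y_obs = 10, 7
theta = np.linspace(0.0, 1.0, 2001)
prior = np.ones_like(theta)                      # Uniform(0, 1) prior density
lik = comb(n, y_obs) * theta**y_obs * (1 - theta)**(n - y_obs)

slice_at_y = prior * lik                         # the slice of the joint at y = y_obs
# Its area is the normalizing constant; for a uniform prior, p(y) = 1/(n+1)
norm = float(np.sum(0.5 * (slice_at_y[1:] + slice_at_y[:-1]) * np.diff(theta)))
posterior = slice_at_y / norm                    # normalized slice = posterior density

# With a uniform prior the posterior is Beta(y+1, n-y+1); compare means
g = theta * posterior
post_mean = float(np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(theta)))
```

Normalizing the slice turns it into the posterior density, which is the whole job the constant performs.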
35,968 | Why does the reconstruction error of truncated SVD equal the sum of squared singular values? | Let $$X = U\Sigma V^\prime$$ be the SVD of the $n\times r$ matrix $X$. Let $||\quad ||$ be any matrix norm that is left- and right-invariant under orthogonal transformations (reflections and rotations); that is, whenever $P$ is an $n\times n$ orthogonal matrix or $Q$ is an $r\times r$ orthogonal matrix, then
$$||P^\prime X Q|| = ||X||.$$
Then the invariance of the norm and the very definition of the SVD (which gives $U^\prime X V = \Sigma$) imply $$||X - A||^2 = ||U^\prime (X-A) V||^2 = ||\Sigma - U^\prime A V||^2.$$
Since $A$ is formulated to make $U^\prime A V$ a diagonal matrix that agrees with the first $k$ entries of the diagonal matrix $\Sigma$, the right hand side is just the squared norm of $\Sigma$ after those $k$ diagonal entries have been zeroed out.
For the Frobenius norm (whose square is the sum of squared entries of its argument), the squared norm of this zeroed-out copy of $\Sigma$ is the sum of squares of its remaining entries, precisely
$$ ||\Sigma - U^\prime A V||^2 = \sum_{j=k+1}^r \delta_j^2.$$
But the Frobenius norm obviously is invariant under left- and right-multiplication by orthogonal matrices, since orthogonality by definition means preservation of the Euclidean norm and the Frobenius norm (when squared) is both (a) the sum of squared Euclidean norms of the rows (and so is invariant under left multiplication, which preserves each row norm) and (b) the sum of squared Euclidean norms of the columns (and so is invariant under right multiplication, which preserves each column norm).
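A quick numerical confirmation of the identity (a sketch on random data, not part of the argument itself): truncate the SVD at rank $k$ and compare the squared Frobenius error with the tail sum of squared singular values:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, k = 20, 8, 3
X = rng.normal(size=(n, r))

U, s, Vt = np.linalg.svd(X, full_matrices=False)
A = (U[:, :k] * s[:k]) @ Vt[:k, :]          # rank-k truncation of X

err = np.linalg.norm(X - A, "fro") ** 2     # squared reconstruction error
tail = float(np.sum(s[k:] ** 2))            # sum of the discarded squared singular values
```

The two quantities agree to floating-point precision for any choice of $k$.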
35,969 | Differentiating the RSS w.r.t. $\beta$ in Linear Model | Note that $-y^TX - y^TX + \beta^T(X^TX + X^TX) \ne -2\beta^T(y^TX + X^TX)$. You pulled an extra $\beta^T$ out from the first term. So you were wrong with the algebra there.
You have,
$$RSS = y^Ty - \beta^TX^Ty - y^TX \beta + \beta^TX^TX\beta.$$
Notice that $y^TX \beta$ is a scalar quantity, thus $y^TX\beta = (y^TX \beta)^T = \beta^TX^Ty$.
$$\frac{\partial{RSS}}{\partial{\beta}} = -2X^Ty + 2X^TX\beta.$$
\begin{align*}
-2X^Ty + 2X^TX \beta & = -2X^T(y - X\beta).
\end{align*}
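The result $\partial RSS/\partial\beta = -2X^T(y - X\beta)$ is easy to verify against finite differences (a sketch with random data; since $RSS$ is quadratic, central differences are essentially exact):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 4))
y = rng.normal(size=30)
beta = rng.normal(size=4)

rss = lambda b: float((y - X @ b) @ (y - X @ b))
grad = -2 * X.T @ (y - X @ beta)            # closed-form gradient

# Central finite differences, one coordinate at a time
eps = 1e-6
num = np.array([(rss(beta + eps * e) - rss(beta - eps * e)) / (2 * eps)
                for e in np.eye(4)])
```

The analytic and numerical gradients match to within rounding error.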
35,970 | Is removing duplicate data necessary for Gaussian Process Regression (GPR)? | The duplicate data add no additional information, and rank-deficiency in the kernel matrix is fatal to the process. Removing them has literally no inferential consequence.
That said, numerically, the kernel matrix $K$ will occasionally become numerically singular if some points are too close together (but not necessarily identical). In this scenario, you can either identify and deal with the problem points (deletion, merging them, whatever) or you can add some (small) noise: $\hat{K}=K+\epsilon I$. Usually $\epsilon=10^{-6}$ is sufficient for me, or you can perform a spectral decomposition of $K$ and then for each eigenvalue $\lambda_i$, replace it with $\hat{\lambda_i}=\max{\{\lambda_i, \epsilon\lambda_{\max}\}}$ for some small $\epsilon.$ The idea here is that you've effectively pinned the smallest eigenvalue of the matrix relative to the largest, and this may be a more "minimal" intervention into the matrix. This is an area where I'm not sure there are any good solutions.
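Both fixes can be sketched in a few lines. The RBF kernel and the value of $\epsilon$ below are illustrative assumptions, not prescriptions:

```python
import numpy as np

def rbf_kernel(x):
    """Squared-exponential kernel matrix for 1-d inputs (unit length-scale)."""
    d = x[:, None] - x[None, :]
    return np.exp(-0.5 * d**2)

x = np.array([0.0, 1.0, 1.0 + 1e-9, 2.0])    # two nearly identical points
K = rbf_kernel(x)                             # numerically singular

K_jitter = K + 1e-6 * np.eye(len(x))          # fix 1: add a small "nugget" to the diagonal

lam, Q = np.linalg.eigh(K)                    # fix 2: floor the eigenvalues
lam = np.maximum(lam, 1e-6 * lam.max())
K_floor = (Q * lam) @ Q.T

np.linalg.cholesky(K_jitter)                  # both now factor without error
np.linalg.cholesky(K_floor)
```

Either repaired matrix admits a Cholesky factorization, which is what GP regression code ultimately needs.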
The numerical component of the problem is considered in more detail on this thread:
Ill-conditioned covariance matrix in GP regression for Bayesian optimization
35,971 | Sampling 100000 times from normal distribution in R : strange distribution of samples' standard deviation | You've got a bug; you're taking sd of moy rather than sam. I bet your code is also pretty slow; a more R-like method would be as follows.
N <- 100000
n <- 10
d <- matrix(rnorm(N*n), nrow=n)
m <- colMeans(d)
s <- apply(d, 2, sd)
hist(s, 10000)
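For readers working in Python, a rough vectorized translation of the R snippet above (same simulation, different library):

```python
import numpy as np

rng = np.random.default_rng(0)
N, n = 100_000, 10
d = rng.normal(size=(n, N))        # one column per simulated sample of size n
m = d.mean(axis=0)                 # 100,000 sample means
s = d.std(axis=0, ddof=1)          # 100,000 sample standard deviations
# A histogram of s is right-skewed, with mean slightly below sigma = 1
```

The mean of `s` sits a little below 1 because the sample standard deviation is a biased estimator of $\sigma$ at small $n$.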
35,972 | When (and why) is a conditional logistic regression equivalent to a Cox proportional hazards model? | Both conditional logistic regression and survival analysis are forms of semi-parametric inference in which complex, unmeasured sources of risk (such as a baseline hazard function) are controlled by organizing data into risk sets.
Formally, a risk set in survival analysis is a collection of individuals at risk for the event at each time point in which a failure is observed. The distribution of measured risk factors in the surviving cohort is compared to those of the individuals who failed at the event time. This ratio allows us to control for the complex, unmeasured baseline hazard function which other factors mediate multiplicatively using a hazard ratio. We ignore the amount of time that actually elapses between each failure time, and consider each risk set to be incrementally at "greater risk" by some unknown amount due to their longer duration of follow-up.
A conditional logistic regression does not have a risk set, per se, but a matched set. These are individuals among whom all unmeasured risk factors are assumed to be the same. Conditional logistic regression iteratively predicts what the cumulative risk of events is in each matched set insofar as matched sets can be ranked in terms of their unmeasured risk. Using a Cox model, each ranked matched set is treated like a risk set in a Cox Model, and then the odds ratios for events are calculated using the same partial likelihood from the Cox Model. Using predictions from these estimated odds ratios, the ranking is updated to account for what is now known about these matched sets' risk due to unmeasured factors (since our updated predictions take better account of the measured risk factors using odds ratios). This process iterates until there is agreement (or convergence) using an expectation maximization framework. This is why clogit takes so much longer to converge than a simple Cox Model.
Formally, because there is a "little bit of estimation" in terms of risk of unmeasured factors in the conditional logistic regression, this method is a "conditional likelihood" maximization whereas the Cox Model is a "partial likelihood" maximization.
So
Data structure) risk sets / matched sets
Why) both account for unmeasured sources of risk.
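The partial likelihood the two methods share can be sketched directly. This is a minimal version for a single covariate with no tied event times, on made-up data:

```python
import numpy as np

def cox_partial_loglik(beta, time, event, x):
    """Cox partial log-likelihood (no tied event times), single covariate."""
    ll = 0.0
    for i in np.where(event == 1)[0]:
        at_risk = time >= time[i]                    # the risk set at this event time
        ll += beta * x[i] - np.log(np.sum(np.exp(beta * x[at_risk])))
    return ll

# Made-up data: 4 subjects, events at t = 2, 3, 7; censoring at t = 5
time = np.array([2.0, 3.0, 5.0, 7.0])
event = np.array([1, 1, 0, 1])
x = np.array([0.5, -1.0, 0.0, 1.5])

# At beta = 0 each event contributes -log(size of its risk set): here 4, 3, and 1
ll0 = cox_partial_loglik(0.0, time, event, x)
```

Conditional logistic regression maximizes the same kind of conditional contribution per matched set, which is why software can implement one in terms of the other.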
35,973 | When (and why) is a conditional logistic regression equivalent to a Cox proportional hazards model? | The data structure is one in which all observed event times are tied at the same time, and the equivalence only holds with one particular handling of these ties - namely the one that ensures that the test matches up with the standard log-rank test. This has an underlying assumption that tied event times truly indicate events happening at exactly the same time. This is as opposed to them having occurred at some point before, but only observed/identified/diagnosed at the recorded event time. In the case of two groups being compared, both methods use the hypergeometric distribution to compare the groups, and the effect sizes are parameterized the same way.
35,974 | Sample mean and variance independence in the case of correlated observations | The sample mean of a multivariate normal vector $\mathbf{X}=(X_1, X_2, \ldots, X_n)$ is a function of
$$M = X_1+X_2+\cdots+X_n$$
and the sample variance is a function of the residual vector with components
$$Z_i = -X_1 - X_2 - \cdots - X_{i-1} + (n-1)X_i - X_{i+1} - \cdots - X_n,$$
$i=1, 2, \ldots, n$.
Let $\Sigma$ be the covariance matrix of $\mathbf{X}$. Write $\sigma_i$ for the sum of column (or row) $i$ of $\Sigma$, $\sigma_i = \Sigma_{1i} + \Sigma_{2i} + \cdots + \Sigma_{ni}$, and let $\sigma$ be the sum of all the entries of $\Sigma$. We may compute
$$\operatorname{Cov}(M, Z_i) = n\sigma_i - \sigma.$$
Because both $M$ and $Z_i$ are linear combinations of multivariate Normal variables, they are jointly Normal, whence they are independent if and only if their covariance is zero. Consequently $M$ is independent of all the $Z_i$ if and only if
$$n\sigma_1 = n\sigma_2 = \cdots = n\sigma_n = \sigma.$$
In other words, equality of the column sums guarantees independence of the mean and the components of the sample variance, whence it will guarantee independence of the mean and the sample variance itself.
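The identity $\operatorname{Cov}(M, Z_i) = n\sigma_i - \sigma$ is a one-line matrix computation ($M = \mathbf{1}'X$ and $Z_i = (n e_i - \mathbf{1})'X$), and it can be checked numerically for an arbitrary covariance matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = rng.normal(size=(n, n))
Sigma = A @ A.T                      # an arbitrary symmetric positive-definite covariance

ones = np.ones(n)
sigma_i = Sigma.sum(axis=0)          # column sums sigma_i
sigma = Sigma.sum()                  # grand sum of all entries

# Cov(M, Z_i) = 1' Sigma (n e_i - 1) for each i
cov_MZ = np.array([ones @ Sigma @ (n * np.eye(n)[i] - ones) for i in range(n)])
predicted = n * sigma_i - sigma
```

The computed covariances match $n\sigma_i - \sigma$ exactly, for any $\Sigma$.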
Although the converse is not true--it is possible for $M$ not to be independent of the $Z_i$, yet for $M$ to be independent of the sample variance--this requires exceptional circumstances. In almost all cases, inequality of the column sums creates a dependence between the sample mean and sample standard deviation.
By definition, in a stationary process the covariances $\Sigma_{ij}$ may depend only on $i-j$. Although this does not guarantee the column sums are all equal, for large $n$ and a covariance that decays sufficiently rapidly with $|i-j|$, it will approximately be true, because in the limit the column sums are all equal:
$$\sigma_i = \sum_{j=-\infty}^\infty \Sigma_{ji} =\sum_{j=-\infty}^\infty \Sigma_{jk} =\sigma_k.$$
All that is required is the convergence of these sums.
A good way to see the dependence in the scatterplot is to render the points more carefully. When they are made semitransparent, you can see the underlying density better. A lowess smooth helps demonstrate a variation in the standard deviation with the mean in this example where $n=8$ and the column sums of $\Sigma$ vary appreciably.
Here is the R code that generated it.
library(MASS) # mvrnorm()
set.seed(17)
n <- 5e4 # Simulation size
d <- 8 # Dimension
k <- 4 # Size of upper block of Sigma
rho <- 0.99 # Correlation in upper block
mu <- rep(0, d) # Mean
Sigma <- outer(1:d, 1:d, function(i,j) ifelse(i <= k & j <= k, rho^abs(i-j), i==j))
colSums(Sigma)
x <- mvrnorm(n, mu, Sigma)
sim <- t(apply(x, 1, function(y) c(mean(y), sd(y))))
plot(sim, pch=16, cex=0.5, col="#00000008",
xlab="Mean", ylab="SD")
i <- order(sim[, 1])
lines(sim[i, 1], lowess(sim[i, 2], f=1/20)$y, col="Red", lwd=2)
# g <- cut(sim[, 1], quantile(sim[, 1], seq(0, 1, by=0.025)))
# boxplot(sim[, 2] ~ g)
$$M = X_1+X_2+\cdots X_n$$
and the sample variance is a function of the residual vector with comp | Sample mean and variance independence in the case of correlated observations
The sample mean of a multivariate normal vector $\mathbf{X}=(X_1, X_2, \ldots, X_n)$ is a function of
$$M = X_1+X_2+\cdots X_n$$
and the sample variance is a function of the residual vector with components
$$Z_i = -X_1 - X_2 - \cdots - X_{i-1} + (n-1)X_i - X_{i+1} - \cdots - X_n,$$
$i=1, 2, \ldots, n$.
Let $\Sigma$ be the covariance matrix of $\mathbf{X}$. Write $\sigma_i$ for the sum of column (or row) $i$ of $\Sigma$, $\sigma_i = \Sigma_{1i} + \Sigma_{2i} + \cdots + \Sigma_{ni}$, and let $\sigma$ be the sum of all the entries of $\Sigma$. We may compute
$$\operatorname{Cov}(M, Z_i) = n\sigma_i - \sigma.$$
Because both $M$ and $Z_i$ are linear combinations of multivariate Normal variables, they are jointly Normal, whence they are independent if and only if their covariance is zero. Consequently $M$ is independent of all the $Z_i$ if and only if
$$n\sigma_1 = n\sigma_2 = \cdots = n\sigma_n = \sigma.$$
In other words, equality of the column sums guarantees independence of the mean and the components of the sample variance, whence it will guarantee independence of the mean and the sample variance itself.
Although the converse is not true--it is possible for $M$ not to be independent of the $Z_i$, yet for $M$ to be independent of the sample mean--this requires exceptional circumstances. In almost all cases, inequality of the column sums creates a dependence between the sample mean and sample standard deviation.
By definition, in a stationary process the covariances $\Sigma_{ij}$ may depend only on $i-j$. Although this does not guarantee the column sums are all equal, for large $n$ and a covariance that decays sufficiently rapidly with $|i-j|$, it will approximately be true, because in the limit the column sums are all equal:
$$\sigma_i = \sum_{j=-\infty}^\infty \Sigma_{ji} =\sum_{j=-\infty}^\infty \Sigma_{jk} =\sigma_k.$$
All that is required is the convergence of these sums.
A good way to see the dependence in the scatterplot is to render the points more carefully. When they are made semitransparent, you can see the underlying density better. A lowess smooth helps demonstrate a variation in the standard deviation with the mean in this example where $n=8$ and the column sums of $\Sigma$ vary appreciably.
Here is the R code that generated it.
library(MASS) # mvrnorm()
set.seed(17)
n <- 5e4 # Simulation size
d <- 8 # Dimension
k <- 4 # Size of upper block of Sigma
rho <- 0.99 # Correlation in upper block
mu <- rep(0, d) # Mean
Sigma <- outer(1:d, 1:d, function(i,j) ifelse(i <= k & j <= k, rho^abs(i-j), i==j))
colSums(Sigma)
x <- mvrnorm(n, mu, Sigma)
sim <- t(apply(x, 1, function(y) c(mean(y), sd(y))))
plot(sim, pch=16, cex=0.5, col="#00000008",
xlab="Mean", ylab="SD")
i <- order(sim[, 1])
lines(sim[i, 1], lowess(sim[i, 2], f=1/20)$y, col="Red", lwd=2)
# g <- cut(sim[, 1], quantile(sim[, 1], seq(0, 1, by=0.025)))
# boxplot(sim[, 2] ~ g)
35,975 | Sample mean and variance independence in the case of correlated observations | Only if $\Sigma$ is diagonal, it appears (see edit). I'm going to look at the covariance between $[\bar{X}, \ldots, \bar{X}]'$ and $X - [\bar{X}, \ldots, \bar{X}]'$. It's easier to work with the unsquared deviates rather than the sum of squared deviates.
Let $X \sim \text{Normal}(\mu, \Sigma)$, where $X, \mu \in R^n$. Let $\Sigma$ look like whatever you want it to. Not diagonal, because you don't want independence between samples. Just as long as it's symmetric and positive definite (I'm assuming we're dealing with full rank normals here).
Then I use bilinearity of $Cov(\cdot, \cdot)$.
$Cov(1\frac{1}{n}1'X, (I-\frac{1}{n}11')X) = \frac{1}{n}11' \Sigma (I-\frac{1}{n}11') = \frac{1}{n}11' \Sigma-\frac{1}{n}11' \Sigma \frac{1}{n}11' = \frac{1}{n}11' \Sigma-\frac{1}{n^2}11' \Sigma 11'$
As whuber points out, 0 covariance/correlation only implies independence in the case of normal vectors. This can be shown by writing down the density and seeing it factors.
Edit: whuber is correct that my final conclusion is incorrect. He came up with a more general criterion that guarantees independence (+1). Below I verify his answer with my notation. Let $e_i = (0,\ldots, 1,\ldots, 0)'$ be the vector with a one in the $i$th spot. Whuber's condition that the row or column sums be equal is equivalent to assuming that $\Sigma e_i = \Sigma e_k$ for $i \neq k$ or $e_i' \Sigma = e_k' \Sigma$. If we re-write $\Sigma 1$ as $\sum_{l=1}^n \Sigma e_l$ then $\Sigma 1 = n \Sigma e_1$ and $Cov(1\frac{1}{n}1'X, (I-\frac{1}{n}11')X)$ becomes the zero matrix using the derivation above. Side note: apologies for overloading $\Sigma$.
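A numeric sketch of the matrix expression $\frac{1}{n}11' \Sigma (I-\frac{1}{n}11')$ derived above (Python; both covariance matrices are made up for illustration). It vanishes exactly when the column sums of $\Sigma$ are equal, which is whuber's criterion:

```python
def cov_mean_vs_residuals(Sigma):
    """Compute (1/n) 11' Sigma (I - (1/n) 11') with plain list arithmetic."""
    n = len(Sigma)
    J = [[1.0 / n] * n for _ in range(n)]                    # (1/n) 11'
    JS = [[sum(J[i][k] * Sigma[k][j] for k in range(n))      # (1/n) 11' Sigma
           for j in range(n)] for i in range(n)]
    return [[JS[i][j] - sum(JS[i][k] * J[k][j] for k in range(n))
             for j in range(n)] for i in range(n)]

# Equal column sums (equicorrelated): the cross-covariance is the zero matrix.
Sigma_eq = [[1.0 if i == j else 0.3 for j in range(3)] for i in range(3)]
Z = cov_mean_vs_residuals(Sigma_eq)
assert all(abs(v) < 1e-12 for row in Z for v in row)

# Unequal column sums: it no longer vanishes.
Sigma_neq = [[1.0, 0.9, 0.0],
             [0.9, 1.0, 0.0],
             [0.0, 0.0, 1.0]]
C = cov_mean_vs_residuals(Sigma_neq)
assert any(abs(v) > 1e-6 for row in C for v in row)
```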
35,976 | Cross-correlation between two seasonal series | The essential question is, what problem are you trying to solve?
If you intend to build a good model for the data (and later use the model for hypothesis testing, forecasting or whatever), you need to account for all the patterns there are. If there is seasonality, you should include seasonal patterns in your model. If you fail to do so, the model might not be adequate; it might yield unreliable hypothesis test results, poor forecasts, etc.
Now you say you want to determine (which I will interpret as estimate) cross correlation between two series. I understand that cross correlation is just the regular correlation estimated for different lags versus leads of the two series. For the intuition it is enough to consider regular correlation, which I will do henceforth. The idea can be carried over seamlessly from regular correlation to cross correlation.
If both of your time series were bivariate $i.i.d.$, the sample correlation would correspond to a population correlation. Hence, you could have a meaningful point estimate, a confidence interval and what not. However, if at least one of the time series is not $i.i.d.$, defining a population counterpart of the sample correlation becomes difficult, and subsequently the estimates are difficult to interpret. Then it becomes easier to specify a model for your data and start asking questions in terms of the model.
Now assume that both series are bivariate $i.i.d.$ except for seasonal patterns in their means. Then you can remove those and estimate the correlation of the seasonally adjusted series (which at this point should be roughly bivariate $i.i.d.$). But be aware that the correlation you are getting after the seasonal adjustment is not informative of your original question, "What is the correlation between the two series?" For example, your two series might have exactly the same seasonal pattern and just minor random variations around it. Thus the two series are almost the same, and you would intuitively think their correlation should be positive and really high (close to unity). But the sample correlation that you get after the seasonal adjustment might be anywhere between [-1,1] since the (estimated, but also the true underlying) random noise components of the two series may or may not be correlated. Thus you would be getting an answer to a question you are not really interested in; there is no guarantee that the answer would be anywhere close to what you are actually looking for.
Therefore, I recommend relying on a fully specified model (unless both of your time series are bivariate $i.i.d.$) and asking questions in terms of the model. On the other hand, if you have no time for building a model and you need a quick answer (that can happen), I believe the most relevant point estimate of the correlation between the two series would be just the regular sample correlation (even though it has the problem of not having a meaningful counterpart in population and its confidence interval would be difficult to define, as explained above).
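To make "cross correlation is just the regular correlation estimated for different lags versus leads" concrete, here is a minimal Python helper (an illustrative sketch, not code from the post):

```python
def cross_corr(x, y, lag):
    """Ordinary sample correlation between x[t] and y[t + lag]."""
    if lag >= 0:
        a, b = x[:len(x) - lag], y[lag:]
    else:
        a, b = x[-lag:], y[:len(y) + lag]
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    va = sum((u - ma) ** 2 for u in a)
    vb = sum((v - mb) ** 2 for v in b)
    return cov / (va ** 0.5 * vb ** 0.5)

# y is x delayed by one step, so the cross correlation peaks at lag 1.
x = [0.0, 1.0, 0.0, -1.0, 0.0, 1.0, 0.0, -1.0, 0.0, 1.0]
y = [0.0] + x[:-1]
assert cross_corr(x, y, 1) > 0.99
assert abs(cross_corr(x, y, 0)) < 0.01
```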
35,977 | Cross-correlation between two seasonal series | If you are regressing two (unrelated) time series with seasonality you might get what is called spurious correlation. An example of that is available here
"it is important to consider whether significant trends exist in the series;
if we ignore a common trend, we may be estimating a spurious regression, in which both $y$ and the $X$ variables appear to be correlated because of the influence on both of an omitted factor, the passage of time" -- source
The common trend could be a drift or a seasonality pattern. To avoid spurious correlation it is essential to whiten your data, netting off the effect of trend and seasonality. You can then regress on the residuals.
A more formal intro to the problem is available here
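The spurious-correlation point can be seen in a few lines of Python: two otherwise unrelated noise series that share a seasonal pattern look highly correlated, and the correlation largely disappears once seasonal means are subtracted (toy data, illustrative only):

```python
import random

random.seed(0)
season = [0.0, 5.0, 10.0, 5.0]        # common period-4 seasonal pattern
n = 400
x = [season[t % 4] + random.gauss(0, 1) for t in range(n)]
y = [season[t % 4] + random.gauss(0, 1) for t in range(n)]

def corr(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    va = sum((u - ma) ** 2 for u in a)
    vb = sum((v - mb) ** 2 for v in b)
    return cov / (va ** 0.5 * vb ** 0.5)

raw = corr(x, y)                      # inflated by the shared seasonal pattern

def deseason(z):
    """Whiten by subtracting each season's mean (a crude seasonal adjustment)."""
    means = [sum(z[t] for t in range(s, n, 4)) / (n // 4) for s in range(4)]
    return [z[t] - means[t % 4] for t in range(n)]

res = corr(deseason(x), deseason(y))  # residual correlation is near zero
assert raw > 0.8
assert abs(res) < 0.2
```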
35,978 | Cross-correlation between two seasonal series | The link to the "more formal solution" seems to go to a generic uni page. Could you update it to point to the article, if still available?
35,979 | Neural networks with complex weights | Gradient Descent will work in your case. You can use Theano. It supports differentiation with respect to complex variables. You can check this link if you need more information. One important note is that your error function needs to return a scalar.
I'm not quite sure that other algorithms like Conjugate gradient or quasi-Newton will work fine for complex numbers. If you want to implement other learning algorithms, you will need to verify their proofs to make sure that they are possible as well.
35,980 | Neural networks with complex weights
The differences between complex-valued and real-valued networks have been thoroughly explained in an ESANN 2011 paper by Zimmermann, Minin and Kusherbaeva (Comparison of the Complex Valued and Real Valued Neural Networks Trained with Gradient Descent and Random Search Algorithms), which I recommend reading.
Main points :
The feed-forward function may have singularity points; you can avoid that by using a sigmoid complex function that has a singularity at infinite values only, and use a bounded range for your inputs to stay safe;
The back-propagation algorithm can be recovered by basing the weight adaptation procedure on the Taylor expansion for the error
$E(W+\Delta W) = E(W) + G^T\Delta W + \tfrac{1}{2} \Delta W^T H \Delta W$, where $G$ is the gradient and $H$ the Hessian;
then the rule for weight adaptation can be written as
$\Delta W = -\eta \, \partial E/\partial W$, with $\eta$ as the learning rate.
The sensitivity to initial conditions led the authors of the article to recommend the use of RSA (Random Search Algorithm) rather than direct random initialization of the weights.
Once RSA is used the complex-valued neural network will always converge, and is reported to be on par with real-valued network (slightly better but not significantly in the experiments).
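The weight-update rule above carries over to complex weights when the derivative is taken in the Wirtinger sense (descending along the gradient with respect to the conjugate weight); here is a minimal Python sketch with a toy one-sample quadratic loss (illustrative only, not the paper's algorithm):

```python
# For E(w) = |w*x - y|^2 the Wirtinger gradient with respect to conj(w)
# is (w*x - y) * conj(x); stepping against it implements Delta w = -eta * dE/dw.
# The single "training pair" below is made up.

x, y = 2.0 + 1.0j, 1.0 - 3.0j
w, eta = 0.0 + 0.0j, 0.05

for _ in range(200):
    grad = (w * x - y) * x.conjugate()   # descent direction for the complex weight
    w -= eta * grad

assert abs(w * x - y) < 1e-6             # converged to w = y / x
```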
In conclusion: you can use complex weights in neural networks if your domain requires it. However, you should devote some time to building a sound, specific library, as there are subtle pitfalls.
| Neural networks with complex weights
35,981 | randomForest vs randomForestSRC discrepancies | One of the causes of the packages producing different results is the way nodesize is implemented internally. In randomForest, the value appears to be a strict lower bound. In randomForestSRC, while we (unfortunately) don't document the subtlety, we will not attempt to split a node without at least 2 * nodesize replicates in a node. But when we do, it can result in one daughter < nodesize, and the other daughter >= nodesize. What we can say is that "on average" our terminal nodes across the forest will be of size = nodesize. The result is that we can grow slightly better trees than RF with the "same" setting.
If you set nodesize = 1 to avoid this issue, and accommodate for Monte Carlo effects by growing multiple forests with multiple simulations, you will find that the MSEs for both packages coincide.
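The splitting rule just described can be sketched in a few lines (toy Python illustration, not randomForestSRC's actual implementation):

```python
def can_split(node_size, nodesize):
    # A node is only considered for splitting when it holds
    # at least 2 * nodesize cases ...
    return node_size >= 2 * nodesize

nodesize = 5
node = list(range(11))            # 11 cases: splitting is allowed (11 >= 10)
assert can_split(len(node), nodesize)

# ... yet a chosen split point can still leave one daughter below nodesize:
left, right = node[:3], node[3:]
assert len(left) < nodesize and len(right) >= nodesize
```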
35,982 | randomForest vs randomForestSRC discrepancies | I like randomForestSRC a lot. It has some really nice plots and diagnostics.
There are a lot of choices of how to implement the algorithm. For example, looking at the help pages, rfsrc has a splitrule, where "The default rule is weighted mean-squared error splitting mse". How does randomForest do it? They each can control the size of trees, but do it in two different ways: one by specifying the maximum number of leaves, the other by specifying the maximum depth. There are dozens of such choices, some of which are exposed as parameters (the two I mentioned), but many which are not.
So I can't tell you exactly why they produce different results, but a random forest is not a closed-form like, say, OLS, nor is it an optimization (MLE) procedure. It's more algorithmic in nature so there are no mathematical reasons that would force them to agree.
Why do you ask? Are you simply looking for an explanation? Your question about forcing the same answer makes me think you're doing something like speed benchmarking and want to compare speeds for reaching exactly the same answer. Or something that might better be explicit.
EDIT: OK, per your comment, I'd recommend changing your question to be about the paradox and documenting what you did and your results.
My guess is, to the extent that an RV actually helps results, it's because adding an RV might dilute the effect of RF's preferring to split continuous or categorical variables with lots of levels. If so, trying RVs with randomForestSRC with split of zero (usual, deterministic) and non-zero (random splits) might illustrate this.
35,983 | Bibliography for linear models | I have read Christensen's books and would include it too. Certainly you'd want to include the following (in no particular order).
P. McCullagh, John A. Nelder. Generalized Linear Models, Second Edition. Volume 37 of Chapman & Hall/CRC Monographs on Statistics & Applied Probability. CRC Press, 1989.
Norman R. Draper, Harry Smith. Applied Regression Analysis, Third Edition. Wiley Series in Probability and Statistics. John Wiley & Sons, 2014.
Michael H. Kutner, Chris J. Nachtsheim, John Neter. Applied Linear Regression Models. McGraw-Hill Higher Education, 2003.
Shayle R. Searle. Linear Models. Wiley Series in Probability and Statistics - Applied Probability and Statistics Section. Wiley, 2012.
Alvin C. Rencher, G. Bruce Schaalje. Linear Models in Statistics. John Wiley & Sons, 2008.
John F. Monahan. A Primer on Linear Models. Chapman & Hall/CRC Texts in Statistical Science. CRC Press, 2008
Nalini Ravishanker, Dipak K. Dey. A First Course in Linear Model Theory. Chapman & Hall/CRC Texts in Statistical Science. CRC Press, 2001.
John Fox. Applied Regression Analysis, Linear Models, and Related Methods. SAGE Publications, 1997.
Another good list comes from Steve Vardeman at Iowa State: http://www.public.iastate.edu/~vardeman/stat511/Stat%20511%20Useful%20Book%20List.pdf
35,984 | Derivation of bias-variance decomposition expression for K-nearest neighbor regression | The previous answer is wrong in the bias derivation. I think the correct and full answer should be the following.
Firstly, I should note that for kNN we use an important assumption that all $X_i$ are fixed in the training set, i.e. $\mathcal{T}=(x_i,Y_i)_{i=1}^N$ (all randomness arises from the $Y_i$). Hence, here the training set $\mathcal{T}=(x_i,Y_i)_{i=1}^N$ is a random variable and its realizations are fixed training sets $(x_i,y_{ki})_{i=1}^N,~ k = 1,2,\ldots$ (these realizations all have exactly the same subset $(x_i)_{i=1}^N$, but each of them has unique target subset $(y_{ki})_{i=1}^N$, where $k$ is the index of $k$-th realization of the random variable $\mathcal{T}$).
Below I use subscript $_\mathcal{T}$ for all variables which depend on training set $\mathcal{T}$ (estimator $\hat{f}_k$ from Hastie's notation is $\hat{f}_{\mathcal{T}}$ in my notation). All such variables are random variables since they are depend on random variable $\mathcal{T}$. Expectation $\mathrm{E}_{\mathcal{T}}[\,\cdot\,]$ is taken over all possible realizations of random variable $\mathcal{T}=(x_i,Y_i)_{i=1}^N$.
For any additive regression model $Y = f(X) + \varepsilon$, where $\mathrm{E}[\varepsilon] =0$, $\mathrm{Var}(\varepsilon) = \sigma^2_\epsilon$, bias-variance decomposition of the expected test error at a fixed test point $X = x_0$ is the following (Hastie p.223 eq.7.9; random variable $Y$ below is a target of $x_0$):
$$\begin{align}\text{Err}(x_0) &= \mathrm{E}_\mathcal{T} \left[\left(Y - \hat{f}_{\mathcal{T}}(x_0) \right)^2 \Big| X= x_0\right] = \\
&= \underbrace{\left(f(x_0) - \mathrm{E}_\mathcal{T}[\hat{f}_{\mathcal{T}}(x_0)] \right)^{2}}_\mathrm{Bias^2}
+ \underbrace{\mathrm{E}_{\mathcal{T}}\left[\left(\hat{f}_{\mathcal{T}}(x_0) - \mathrm{E}_\mathcal{T}[\hat{f}_{\mathcal{T}}(x_0)]\right)^{2}\right]}_\mathrm{Variance}
+ \underbrace{\sigma^2_\varepsilon}_\mathrm{Noise}.\end{align}$$
In the case of kNN we can simplify this expression. Firstly, let's evaluate expectation $\mathrm{E}_\mathcal{T}[\hat{f}_{\mathcal{T}}(x_0)]$:
$$\begin{align}\mathrm{E}_\mathcal{T}[\hat{f}_{\mathcal{T}}(x_0)] &= \mathrm{E}_\mathcal{T} \left[\frac{1}{k} \sum_{\ell=1}^k Y_{\mathcal{T},(\ell)}\right] = \mathrm{E}_\mathcal{T}\left[\frac{1}{k} \sum_{\ell=1}^k \Big(f(x_{(\ell)}) + \varepsilon_{\mathcal{T},(\ell)}\Big) \right] =\\ &= \frac{1}{k} \sum_{\ell=1}^k f(x_{(\ell)}) + \frac{1}{k} \sum_{\ell=1}^k \underbrace{\mathrm{E}_\mathcal{T}\left[\varepsilon_{\mathcal{T},(\ell)}\right]}_{=0}
= \frac{1}{k} \sum_{\ell=1}^k f(x_{(\ell)}).\end{align}$$
Here we used the aforementioned assumption that $\mathcal{T}=(x_i,Y_i)_{i=1}^N$, hence all possible realizations of $\mathcal{T}$ have exactly the same values of $(x_i)_{i=1}^N$. This means that $x_{(1)}, \ldots, x_{(k)}$, the $k$ nearest neighbors of $x_0$, are fixed in all realizations (they are constant for $\mathrm{E}_{\mathcal{T}}[\,\cdot\,]$), therefore $f(x_{(1)}), \ldots, f(x_{(k)})$ are also constant for $\mathrm{E}_{\mathcal{T}}[\,\cdot\,]$.
Next, let's evaluate bias of kNN:
$$\mathrm{Bias}_{knn}^2(x_0) = \left(f(x_0) - \mathrm{E}_\mathcal{T}[\hat{f}_{\mathcal{T}}(x_0)] \right)^{2} = \left(f(x_0) - \frac{1}{k} \sum_{\ell=1}^k f(x_{(\ell)})\right)^2.$$
And variance of kNN is the following:
$$\begin{align}\mathrm{Variance}_{knn}(x_0) &= \mathrm{E}_{\mathcal{T}}\left[\left(\hat{f}_{\mathcal{T}}(x_0) - \mathrm{E}_\mathcal{T}[\hat{f}_{\mathcal{T}}(x_0)]\right)^{2}\right] \\ &= \mathrm{E}_\mathcal{T} \left[\left(\frac{1}{k} \sum_{\ell=1}^k Y_{\mathcal{T},(\ell)} - \frac{1}{k} \sum_{\ell=1}^k f(x_{(\ell)}) \right)^2\right] \\&=
\mathrm{E}_\mathcal{T} \left[\left(\frac{1}{k} \sum_{\ell=1}^k \Big(f(x_{(\ell)}) + \varepsilon_{\mathcal{T},(\ell)}\Big) - \frac{1}{k} \sum_{\ell=1}^k f(x_{(\ell)}) \right)^2\right] \\
&= \mathrm{E}_\mathcal{T} \left[\left(\frac{1}{k} \sum_{\ell=1}^k \varepsilon_{\mathcal{T},(\ell)} \right)^2\right] = \frac{1}{k^2} \mathrm{E}_\mathcal{T} \left[\left(\sum_{\ell=1}^k \varepsilon_{\mathcal{T},(\ell)} \right)^2\right] \\ &= \frac{1}{k^2} \mathrm{E}_\mathcal{T} \Bigg[\Bigg(\sum_{\ell=1}^k \varepsilon_{\mathcal{T},(\ell)} - \underbrace{\mathrm{E}_\mathcal{T} \left[\sum_{\ell=1}^k \varepsilon_{\mathcal{T},(\ell)} \right]}_{=0} \Bigg)^2\Bigg] = \frac{1}{k^2} \mathrm{Var}_\mathcal{T} \left(\sum_{\ell=1}^k \varepsilon_{\mathcal{T},(\ell)}\right) \\&= \frac{1}{k^2} \sum_{\ell=1}^k \mathrm{Var}_\mathcal{T}\left(\varepsilon_{\mathcal{T},(\ell)}\right) = \frac{k \sigma^2_\varepsilon}{k^2} = \frac{\sigma^2_\varepsilon}{k}.
\end{align}$$
Finally, we get exactly the same bias-variance decomposition for kNN as Hastie (p.223 eq.7.10):
$$\text{Err}_{knn}(x_0) = \underbrace{\left(f(x_0) - \frac{1}{k} \sum_{\ell=1}^k f(x_{(\ell)})\right)^2}_{\mathrm{Bias^2}} + \underbrace{\frac{\sigma^2_\varepsilon}{k}}_{\mathrm{Variance}} + \underbrace{\sigma^2_\varepsilon}_{\mathrm{Noise}}.$$ | Derivation of bias-variance decomposition expression for K-nearest neighbor regression | The previous answer is wrong in the part of bias derivation. I think the correct and full answer should be the following.
Firstly, I should note that for kNN we use an important assumption that all $X | Derivation of bias-variance decomposition expression for K-nearest neighbor regression
The previous answer is wrong in the part of bias derivation. I think the correct and full answer should be the following.
Firstly, I should note that for kNN we use an important assumption that all $X_i$ are fixed in the training set, i.e. $\mathcal{T}=(x_i,Y_i)_{i=1}^N$ (all randomness arises from the $Y_i$). Hence, here the training set $\mathcal{T}=(x_i,Y_i)_{i=1}^N$ is a random variable and its realizations are fixed training sets $(x_i,y_{ki})_{i=1}^N,~ k = 1,2,\ldots$ (these realizations all have exactly the same subset $(x_i)_{i=1}^N$, but each of them has unique target subset $(y_{ki})_{i=1}^N$, where $k$ is the index of $k$-th realization of the random variable $\mathcal{T}$).
Below I use the subscript $_\mathcal{T}$ for all variables which depend on the training set $\mathcal{T}$ (the estimator $\hat{f}_k$ in Hastie's notation is $\hat{f}_{\mathcal{T}}$ in my notation). All such variables are random variables since they depend on the random variable $\mathcal{T}$. The expectation $\mathrm{E}_{\mathcal{T}}[\,\cdot\,]$ is taken over all possible realizations of the random variable $\mathcal{T}=(x_i,Y_i)_{i=1}^N$.
For any additive regression model $Y = f(X) + \varepsilon$, where $\mathrm{E}[\varepsilon] = 0$, $\mathrm{Var}(\varepsilon) = \sigma^2_\varepsilon$, the bias-variance decomposition of the expected test error at a fixed test point $X = x_0$ is the following (Hastie p.223, eq. 7.9; the random variable $Y$ below is the target at $x_0$):
$$\begin{align}\text{Err}(x_0) &= \mathrm{E}_\mathcal{T} \left[\left(Y - \hat{f}_{\mathcal{T}}(x_0) \right)^2 \Big| X= x_0\right] = \\
&= \underbrace{\left(f(x_0) - \mathrm{E}_\mathcal{T}[\hat{f}_{\mathcal{T}}(x_0)] \right)^{2}}_\mathrm{Bias^2}
+ \underbrace{\mathrm{E}_{\mathcal{T}}\left[\left(\hat{f}_{\mathcal{T}}(x_0) - \mathrm{E}_\mathcal{T}[\hat{f}_{\mathcal{T}}(x_0)]\right)^{2}\right]}_\mathrm{Variance}
+ \underbrace{\sigma^2_\varepsilon}_\mathrm{Noise}.\end{align}$$
In the case of kNN we can simplify this expression. Firstly, let's evaluate expectation $\mathrm{E}_\mathcal{T}[\hat{f}_{\mathcal{T}}(x_0)]$:
$$\begin{align}\mathrm{E}_\mathcal{T}[\hat{f}_{\mathcal{T}}(x_0)] &= \mathrm{E}_\mathcal{T} \left[\frac{1}{k} \sum_{\ell=1}^k Y_{\mathcal{T},(\ell)}\right] = \mathrm{E}_\mathcal{T}\left[\frac{1}{k} \sum_{\ell=1}^k \Big(f(x_{(\ell)}) + \varepsilon_{\mathcal{T},(\ell)}\Big) \right] =\\ &= \frac{1}{k} \sum_{\ell=1}^k f(x_{(\ell)}) + \frac{1}{k} \sum_{\ell=1}^k \underbrace{\mathrm{E}_\mathcal{T}\left[\varepsilon_{\mathcal{T},(\ell)}\right]}_{=0}
= \frac{1}{k} \sum_{\ell=1}^k f(x_{(\ell)}).\end{align}$$
Here we used the aforementioned assumption that $\mathcal{T}=(x_i,Y_i)_{i=1}^N$, hence all possible realizations of $\mathcal{T}$ have exactly the same values of $(x_i)_{i=1}^N$. This means that $x_{(1)}, \ldots, x_{(k)}$, the $k$ nearest neighbors of $x_0$, are fixed in all realizations (they are constant for $\mathrm{E}_{\mathcal{T}}[\,\cdot\,]$), therefore $f(x_{(1)}), \ldots, f(x_{(k)})$ are also constant for $\mathrm{E}_{\mathcal{T}}[\,\cdot\,]$.
Next, let's evaluate bias of kNN:
$$\mathrm{Bias}_{knn}^2(x_0) = \left(f(x_0) - \mathrm{E}_\mathcal{T}[\hat{f}_{\mathcal{T}}(x_0)] \right)^{2} = \left(f(x_0) - \frac{1}{k} \sum_{\ell=1}^k f(x_{(\ell)})\right)^2.$$
And variance of kNN is the following:
$$\begin{align}\mathrm{Variance}_{knn}(x_0) &= \mathrm{E}_{\mathcal{T}}\left[\left(\hat{f}_{\mathcal{T}}(x_0) - \mathrm{E}_\mathcal{T}[\hat{f}_{\mathcal{T}}(x_0)]\right)^{2}\right] \\ &= \mathrm{E}_\mathcal{T} \left[\left(\frac{1}{k} \sum_{\ell=1}^k Y_{\mathcal{T},(\ell)} - \frac{1}{k} \sum_{\ell=1}^k f(x_{(\ell)}) \right)^2\right] \\&=
\mathrm{E}_\mathcal{T} \left[\left(\frac{1}{k} \sum_{\ell=1}^k \Big(f(x_{(\ell)}) + \varepsilon_{\mathcal{T},(\ell)}\Big) - \frac{1}{k} \sum_{\ell=1}^k f(x_{(\ell)}) \right)^2\right] \\
&= \mathrm{E}_\mathcal{T} \left[\left(\frac{1}{k} \sum_{\ell=1}^k \varepsilon_{\mathcal{T},(\ell)} \right)^2\right] = \frac{1}{k^2} \mathrm{E}_\mathcal{T} \left[\left(\sum_{\ell=1}^k \varepsilon_{\mathcal{T},(\ell)} \right)^2\right] \\ &= \frac{1}{k^2} \mathrm{E}_\mathcal{T} \Bigg[\Bigg(\sum_{\ell=1}^k \varepsilon_{\mathcal{T},(\ell)} - \underbrace{\mathrm{E}_\mathcal{T} \left[\sum_{\ell=1}^k \varepsilon_{\mathcal{T},(\ell)} \right]}_{=0} \Bigg)^2\Bigg] = \frac{1}{k^2} \mathrm{Var}_\mathcal{T} \left(\sum_{\ell=1}^k \varepsilon_{\mathcal{T},(\ell)}\right) \\&= \frac{1}{k^2} \sum_{\ell=1}^k \mathrm{Var}_\mathcal{T}\left(\varepsilon_{\mathcal{T},(\ell)}\right) = \frac{k \sigma^2_\varepsilon}{k^2} = \frac{\sigma^2_\varepsilon}{k}.
\end{align}$$
Finally, we get exactly the same bias-variance decomposition for kNN as Hastie (p.223 eq.7.10):
$$\text{Err}_{knn}(x_0) = \underbrace{\left(f(x_0) - \frac{1}{k} \sum_{\ell=1}^k f(x_{(\ell)})\right)^2}_{\mathrm{Bias^2}} + \underbrace{\frac{\sigma^2_\varepsilon}{k}}_{\mathrm{Variance}} + \underbrace{\sigma^2_\varepsilon}_{\mathrm{Noise}}.$$ | Derivation of bias-variance decomposition expression for K-nearest neighbor regression
The previous answer is wrong in the part of bias derivation. I think the correct and full answer should be the following.
Firstly, I should note that for kNN we use an important assumption that all $X |
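The decomposition above can be sanity-checked numerically. The sketch below is not from the answer; it assumes a particular regression function (a sine), a particular fixed design, and Gaussian noise, and verifies by Monte Carlo that the variance of the kNN fit at $x_0$ is $\sigma^2_\varepsilon/k$ and that the squared bias matches $\big(f(x_0)-\tfrac{1}{k}\sum_\ell f(x_{(\ell)})\big)^2$:

```python
import numpy as np

rng = np.random.default_rng(0)

f = np.sin                      # assumed true regression function (my choice for the check)
x = np.linspace(0.0, 3.0, 50)   # fixed design: the x_i are the same in every training set
sigma_eps = 0.3                 # noise standard deviation
k = 7                           # number of neighbours
x0 = 1.5                        # test point

nbrs = np.argsort(np.abs(x - x0))[:k]   # k nearest neighbours of x0 (fixed across training sets)

# Theoretical terms from the decomposition
bias2 = (f(x0) - f(x[nbrs]).mean()) ** 2
var_theory = sigma_eps ** 2 / k

# Monte Carlo over training sets: only the noise in the targets changes
M = 200_000
eps = rng.normal(0.0, sigma_eps, size=(M, k))
fhat = (f(x[nbrs]) + eps).mean(axis=1)  # kNN prediction at x0, one per training set

var_mc = fhat.var()
bias2_mc = (fhat.mean() - f(x0)) ** 2
print(var_theory, var_mc)    # both close to sigma_eps**2 / k
print(bias2, bias2_mc)
```

With these (arbitrary) choices the simulated variance of $\hat{f}_{\mathcal{T}}(x_0)$ matches $\sigma^2_\varepsilon/k$ to Monte Carlo accuracy.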
35,985 | Derivation of bias-variance decomposition expression for K-nearest neighbor regression | Edit
This is incorrect as others have noted, see correct answer below this one.
OLD
Let the label of $x$ be given by $Y(x) = f(x) + \epsilon$. Let the nearest neighbors of $x_0$ be $x_i$. Then the variance of this estimate is:
\begin{align}
variance &= var \left( \frac{1}{k} \sum_i^k Y(x_i) \right) \\
&= \frac{1}{k^2} \sum_i^k var \left( f(x_i) + \epsilon_i \right) \\
&= \frac{1}{k^2} \sum_i^k var \left( f(x_i) \right) + var \left( \epsilon_i \right) \\
&= \frac{1}{k^2} \sum_i^k var \left( \epsilon_i \right) \\
&= \frac{1}{k^2} k \sigma_\epsilon^2 \\
&= \frac{\sigma^2_\epsilon}{k}
\end{align}
$var(f(x_i))=0$ because we have made the strong assumption that the neighbors $x_i$ are fixed, and hence $f(x_i)$ has no variance. $\sigma_\epsilon^2$ by definition is the variance of $\epsilon$.
The squared bias is the square of the difference between the target function $Y$ and the "average prediction" over all training sets $\tau$, $E_\tau(\hat{f}_k(x_0))$.
\begin{align}
bias^2 &= \left( Y(x_0) - E_\tau(\hat{f}_k(x_0)) \right) ^2 \\
&= \left( Y(x_0) - E_\tau\left(\frac{1}{k} \sum_i^k Y(x_i) \right)\right) ^2 \\
&= \left( Y(x_0) - \frac{1}{k} \sum_i^k Y(x_i) \right) ^2 \\
&= \left( f(x_0) + \epsilon_0 - \frac{1}{k} \sum_i^k \left( f(x_i) + \epsilon_i \right) \right) ^2 \\
\end{align}
Assuming fixed neighbors, we get $E_\tau\left(\frac{1}{k} \sum_i^k Y(x_i) \right)= \frac{1}{k} \sum_i^k Y(x_i)$ on line two. Here, all the $\epsilon$ values disappear when we take the expectation of the bias over all test samples $x_0$, because it has zero mean. | Derivation of bias-variance decomposition expression for K-nearest neighbor regression | Edit
This is incorrect as others have noted, see correct answer below this one.
OLD
Let the label of $x$ be given by $Y(x) = f(x) + \epsilon$. Let the nearest neighbors of $x_0$ be $x_i$. Then the | Derivation of bias-variance decomposition expression for K-nearest neighbor regression
Edit
This is incorrect as others have noted, see correct answer below this one.
OLD
Let the label of $x$ be given by $Y(x) = f(x) + \epsilon$. Let the nearest neighbors of $x_0$ be $x_i$. Then the variance of this estimate is:
\begin{align}
variance &= var \left( \frac{1}{k} \sum_i^k Y(x_i) \right) \\
&= \frac{1}{k^2} \sum_i^k var \left( f(x_i) + \epsilon_i \right) \\
&= \frac{1}{k^2} \sum_i^k var \left( f(x_i) \right) + var \left( \epsilon_i \right) \\
&= \frac{1}{k^2} \sum_i^k var \left( \epsilon_i \right) \\
&= \frac{1}{k^2} k \sigma_\epsilon^2 \\
&= \frac{\sigma^2_\epsilon}{k}
\end{align}
$var(f(x_i))=0$ because we have made the strong assumption that the neighbors $x_i$ are fixed, and hence $f(x_i)$ has no variance. $\sigma_\epsilon^2$ by definition is the variance of $\epsilon$.
The squared bias is the square of the difference between the target function $Y$ and the "average prediction" over all training sets $\tau$, $E_\tau(\hat{f}_k(x_0))$.
\begin{align}
bias^2 &= \left( Y(x_0) - E_\tau(\hat{f}_k(x_0)) \right) ^2 \\
&= \left( Y(x_0) - E_\tau\left(\frac{1}{k} \sum_i^k Y(x_i) \right)\right) ^2 \\
&= \left( Y(x_0) - \frac{1}{k} \sum_i^k Y(x_i) \right) ^2 \\
&= \left( f(x_0) + \epsilon_0 - \frac{1}{k} \sum_i^k \left( f(x_i) + \epsilon_i \right) \right) ^2 \\
\end{align}
Assuming fixed neighbors, we get $E_\tau\left(\frac{1}{k} \sum_i^k Y(x_i) \right)= \frac{1}{k} \sum_i^k Y(x_i)$ on line two. Here, all the $\epsilon$ values disappear when we take the expectation of the bias over all test samples $x_0$, because it has zero mean. | Derivation of bias-variance decomposition expression for K-nearest neighbor regression
Edit
This is incorrect as others have noted, see correct answer below this one.
OLD
Let the label of $x$ be given by $Y(x) = f(x) + \epsilon$. Let the nearest neighbors of $x_0$ be $x_i$. Then the |
35,986 | Why do statisticians prove asymptotic normality? | It is for example useful to do so in order to be able to quantify the sampling uncertainty of an estimator, or the null distribution of a test.
Recall that normal random variables take 95% of their realizations in the interval $\mu\pm1.96\sigma$. So if you can demonstrate that (typically, a scaled version of) an estimator is asymptotically normal, then you know it behaves normally at least in large samples, so you can easily construct confidence intervals, for example.
Whether or not the approximation is useful in settings where (as always in practice) your sample is finite is in general unfortunately not known analytically - if we could derive the finite-sample distribution analytically, that is what we would work with. Unfortunately, that only works in very rare cases (for example, when sampling from a normal distribution, the t-statistic follows a t distribution).
Typically, simulations are then used to at least get an idea of the usefulness of the approximation in relevant cases. | Why do statisticians prove asymptotic normality? | It is for example useful to do so in order to be able to quantify the sampling uncertainty of an estimator, or the null distribution of a test.
Recall that normal random variables take 95% of their re | Why do statisticians prove asymptotic normality?
It is for example useful to do so in order to be able to quantify the sampling uncertainty of an estimator, or the null distribution of a test.
Recall that normal random variables take 95% of their realizations in the interval $\mu\pm1.96\sigma$. So if you can demonstrate that (typically, a scaled version of) an estimator is asymptotically normal, then you know it behaves normally at least in large samples, so you can easily construct confidence intervals, for example.
Whether or not the approximation is useful in settings where (as always in practice) your sample is finite is in general unfortunately not known analytically - if we could derive the finite-sample distribution analytically, that is what we would work with. Unfortunately, that only works in very rare cases (for example, when sampling from a normal distribution, the t-statistic follows a t distribution).
Typically, simulations are then used to at least get an idea of the usefulness of the approximation in relevant cases. | Why do statisticians prove asymptotic normality?
It is for example useful to do so in order to be able to quantify the sampling uncertainty of an estimator, or the null distribution of a test.
Recall that normal random variables take 95% of their re |
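The confidence-interval point can be illustrated with a small simulation (a sketch with an assumed population, here Exp(1); the population and sample sizes are my choices, not from the answer). Even though the population is skewed, intervals $\bar{x}\pm1.96\,\sigma/\sqrt{n}$ built from the normal approximation cover the true mean close to 95% of the time for moderately large $n$:

```python
import numpy as np

rng = np.random.default_rng(1)

mu = sigma = 1.0            # mean and s.d. of the Exp(1) population (assumed example)
n, reps = 500, 20_000       # sample size and number of simulated samples

xbar = rng.exponential(mu, size=(reps, n)).mean(axis=1)

# Nominal 95% intervals from the asymptotic normal approximation
half = 1.96 * sigma / np.sqrt(n)
coverage = np.mean((xbar - half <= mu) & (mu <= xbar + half))
print(coverage)             # close to 0.95 despite the skewed population
```

This is exactly the kind of simulation the answer mentions for judging the usefulness of the approximation at a given finite $n$.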
35,987 | Why do statisticians prove asymptotic normality? | Knowing the distribution of the random variable is knowing everything that is knowable about it. The estimators are random variables because they're functions of random samples. Therefore, it's not just a tradition but an ultimate goal of statistical analysis to establish the probability distribution of the metric. Often, small-sample properties of estimators are hard to determine; in these cases, asymptotic properties are the next best thing. So, this is not so much about normality as about the asymptotic distribution of the variable. If it's normal, then it's even better. Due to the CLT and the laws of large numbers, in many cases the asymptotic distributions end up being normal. | Why do statisticians prove asymptotic normality? | Knowing the distribution of the random variable is knowing everything that is knowable about it. The estimators are random variables because they're functions of random samples. Therefore, it's not ju | Why do statisticians prove asymptotic normality?
Knowing the distribution of the random variable is knowing everything that is knowable about it. The estimators are random variables because they're functions of random samples. Therefore, it's not just a tradition but an ultimate goal of statistical analysis to establish the probability distribution of the metric. Often, small-sample properties of estimators are hard to determine; in these cases, asymptotic properties are the next best thing. So, this is not so much about normality as about the asymptotic distribution of the variable. If it's normal, then it's even better. Due to the CLT and the laws of large numbers, in many cases the asymptotic distributions end up being normal. | Why do statisticians prove asymptotic normality?
Knowing the distribution of the random variable is knowing everything that is knowable about it. The estimators are random variables because they're functions of random samples. Therefore, it's not ju |
35,988 | A possible mistake in a conditional probability derivation | I may also be mistaken but see no difficulty with the decomposition.
When $t\ge 0$,
\begin{align*}
P(R(\cos(\pi U)&+\varrho)\ge t,\cos(\pi U)+\varrho\ge 0)\\
&+P(R(\cos(\pi U)+\varrho)\ge t,\cos(\pi U)+\varrho\le 0)\\
&=P(R(\cos(\pi U)+\varrho)\ge t,\cos(\pi U)+\varrho\ge 0)
\end{align*}
since the second term is zero, $R$ being multiplied there by a negative term. Thus
$$
P(XY\ge t)
= P(R(\cos(\pi U)+\varrho)\ge t,\cos(\pi U)+\varrho\ge 0)\quad\qquad \\
\ \ \ = P(R(\cos(\pi U)+\varrho)\ge t,U\le\cos^{-1}(-\varrho)/\pi)\\
= \int_0^{\cos^{-1}(-\varrho)/\pi} P(R(\cos(\pi u)+\varrho)\ge t)\,\text{d}u
$$
seems to be correct.
When $t\le 0$, since $R(\cos(\pi U)+\varrho)\ge t$ is always true when $\cos(\pi U)+\varrho\ge 0$,
\begin{align*}
P(R(\cos(\pi U)&+\varrho)\ge t,\cos(\pi U)+\varrho\ge 0)\\
&+P(R(\cos(\pi U)+\varrho)\ge t,\cos(\pi U)+\varrho\le 0)\\
=P(\cos(\pi U)&+\varrho\ge 0)\\
&+P(R(-\cos(\pi U)-\varrho)\le -t,\cos(\pi U)+\varrho\le 0)\\
=P(\cos(\pi U)&+\varrho\ge 0)\\
&+P\left\{R\le t\big/(\cos(\pi U)+\varrho),U\ge\cos^{-1}(-\varrho)/\pi\right\}\\
=P(\cos(\pi U)&+\varrho\ge 0)\\
&+\int_{\cos^{-1}(-\varrho)/\pi}^1 P\left\{R\le t\big/(\cos(\pi u)+\varrho)\right\}\,\text{d}u
\end{align*}
so this seems to be correct as well. | A possible mistake in a conditional probability derivation | I may also be mistaken but see no difficulty with the decomposition.
When $t\ge 0$,
\begin{align*}
P(R(\cos(\pi U)&+\varrho)\ge t,\cos(\pi U)+\varrho\ge 0)\\
&+P(R(\cos(\pi U)+\varrho)\ge t,\cos(\pi | A possible mistake in a conditional probability derivation
I may also be mistaken but see no difficulty with the decomposition.
When $t\ge 0$,
\begin{align*}
P(R(\cos(\pi U)&+\varrho)\ge t,\cos(\pi U)+\varrho\ge 0)\\
&+P(R(\cos(\pi U)+\varrho)\ge t,\cos(\pi U)+\varrho\le 0)\\
&=P(R(\cos(\pi U)+\varrho)\ge t,\cos(\pi U)+\varrho\ge 0)
\end{align*}
since the second term is zero, $R$ being multiplied there by a negative term. Thus
$$
P(XY\ge t)
= P(R(\cos(\pi U)+\varrho)\ge t,\cos(\pi U)+\varrho\ge 0)\quad\qquad \\
\ \ \ = P(R(\cos(\pi U)+\varrho)\ge t,U\le\cos^{-1}(-\varrho)/\pi)\\
= \int_0^{\cos^{-1}(-\varrho)/\pi} P(R(\cos(\pi u)+\varrho)\ge t)\,\text{d}u
$$
seems to be correct.
When $t\le 0$, since $R(\cos(\pi U)+\varrho)\ge t$ is always true when $\cos(\pi U)+\varrho\ge 0$,
\begin{align*}
P(R(\cos(\pi U)&+\varrho)\ge t,\cos(\pi U)+\varrho\ge 0)\\
&+P(R(\cos(\pi U)+\varrho)\ge t,\cos(\pi U)+\varrho\le 0)\\
=P(\cos(\pi U)&+\varrho\ge 0)\\
&+P(R(-\cos(\pi U)-\varrho)\le -t,\cos(\pi U)+\varrho\le 0)\\
=P(\cos(\pi U)&+\varrho\ge 0)\\
&+P\left\{R\le t\big/(\cos(\pi U)+\varrho),U\ge\cos^{-1}(-\varrho)/\pi\right\}\\
=P(\cos(\pi U)&+\varrho\ge 0)\\
&+\int_{\cos^{-1}(-\varrho)/\pi}^1 P\left\{R\le t\big/(\cos(\pi u)+\varrho)\right\}\,\text{d}u
\end{align*}
so this seems to be correct as well. | A possible mistake in a conditional probability derivation
I may also be mistaken but see no difficulty with the decomposition.
When $t\ge 0$,
\begin{align*}
P(R(\cos(\pi U)&+\varrho)\ge t,\cos(\pi U)+\varrho\ge 0)\\
&+P(R(\cos(\pi U)+\varrho)\ge t,\cos(\pi |
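The two case splits can also be checked by simulation. The sketch below assumes specific distributions purely for the check ($R\sim$ Exp(1), so $R\ge0$, $U\sim$ Unif(0,1), and $\varrho=0.3$ are my choices, not given in the question); it confirms that for $t\ge0$ only the branch with $\cos(\pi U)+\varrho\ge0$ contributes, and that for $t\le0$ the inequality flips as in the derivation:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500_000
rho = 0.3                                   # assumed value of varrho

R = rng.exponential(1.0, n)                 # assumed: R >= 0
U = rng.uniform(0.0, 1.0, n)
c = np.cos(np.pi * U) + rho                 # cos(pi*U) + varrho

# t >= 0: only the branch with c >= 0 can contribute, since R >= 0
t_pos = 0.5
lhs_pos = np.mean(R * c >= t_pos)
rhs_pos = np.mean((R * c >= t_pos) & (c >= 0))

# t <= 0: the event always holds on {c >= 0}; on {c < 0} dividing by the
# negative factor flips the inequality to R <= t / c
t_neg = -0.5
neg = c < 0
ratio = np.where(neg, t_neg / np.where(neg, c, -1.0), np.inf)  # t/c on {c < 0}
lhs_neg = np.mean(R * c >= t_neg)
rhs_neg = np.mean(c >= 0) + np.mean(neg & (R <= ratio))

print(lhs_pos, rhs_pos)   # agree
print(lhs_neg, rhs_neg)   # agree
```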
35,989 | What does the "1" in the R formula "y ~ 1" mean? | In most R regression packages, y ~ 1 means "fit an intercept only".
So in the case of linear regression, this is exactly the same as mean(y). For glm's, it depends on the link function.
Also, y ~ x - 1 means "regress y on x, but leave out the intercept". | What does the "1" in the R formula "y ~ 1" mean? | In most R regression packages, y ~ 1 means "fit an intercept only".
So in the case of linear regression, this is exactly the same as mean(y). For glm's, it depends on the link function.
Also, y ~ x | What does the "1" in the R formula "y ~ 1" mean?
In most R regression packages, y ~ 1 means "fit an intercept only".
So in the case of linear regression, this is exactly the same as mean(y). For glm's, it depends on the link function.
Also, y ~ x - 1 means "regress y on x, but leave out the intercept". | What does the "1" in the R formula "y ~ 1" mean?
In most R regression packages, y ~ 1 means "fit an intercept only".
So in the case of linear regression, this is exactly the same as mean(y). For glm's, it depends on the link function.
Also, y ~ x |
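A Python analogue of the first claim (a sketch, not from the answer): fitting a design matrix that is just a column of ones — the counterpart of y ~ 1 — returns exactly mean(y) as the intercept:

```python
import numpy as np

rng = np.random.default_rng(2)
y = rng.normal(5.0, 2.0, size=100)          # assumed example data

X = np.ones((y.size, 1))                    # intercept-only design: the "1" in y ~ 1
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta[0], y.mean())                    # identical up to floating point
```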
35,990 | What does the "1" in the R formula "y ~ 1" mean? | That means an intercept-only model. You can use model.matrix to find out. Try the following code:
library(boot)
library(reshape)
dataset <- data.frame(Person = c(rep("A", 20), rep("B", 10)), Success = c(rbinom(20, 1, 0.25), rbinom(10, 1, 0.75)))
Aggregated <- cast(Person ~ ., data = dataset, value = "Success", fun = list(mean, length))
m0 <- glm(Success ~ 1, data = dataset, family = binomial)
model.matrix(Success ~ 1, data = dataset) | What does the "1" in the R formula "y ~ 1" mean? | That means intercept only model. You can use model.matrix to find out. Try the following codes:
library(boot)
library(reshape)
dataset <- data.frame(Person = c(rep("A", 20), rep("B", 10)), Success = | What does the "1" in the R formula "y ~ 1" mean?
That means an intercept-only model. You can use model.matrix to find out. Try the following code:
library(boot)
library(reshape)
dataset <- data.frame(Person = c(rep("A", 20), rep("B", 10)), Success = c(rbinom(20, 1, 0.25), rbinom(10, 1, 0.75)))
Aggregated <- cast(Person ~ ., data = dataset, value = "Success", fun = list(mean, length))
m0 <- glm(Success ~ 1, data = dataset, family = binomial)
model.matrix(Success ~ 1, data = dataset) | What does the "1" in the R formula "y ~ 1" mean?
That means intercept only model. You can use model.matrix to find out. Try the following codes:
library(boot)
library(reshape)
dataset <- data.frame(Person = c(rep("A", 20), rep("B", 10)), Success = |
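A Python sketch of the same point (my analogue, not part of the answer): for an intercept-only binomial model the model matrix is a single column of ones, and the fitted intercept has the closed form logit(mean(y)):

```python
import numpy as np

y = np.array([1, 0, 0, 1, 1, 1, 0, 1, 0, 1])   # assumed 0/1 outcomes

X = np.ones((y.size, 1))                       # model matrix for "~ 1": a column of ones
p_hat = y.mean()
beta0 = np.log(p_hat / (1.0 - p_hat))          # intercept-only logistic MLE: logit of the mean
print(p_hat, beta0)
```

This mirrors what glm(Success ~ 1, family = binomial) reports: an intercept equal to the log-odds of the overall success proportion.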
35,991 | Skewness, kurtosis and how many standard deviations values are from the mean | You can always calculate how many SDs values are from the mean by just plugging in sample values, (value $-$ mean)/SD, and then binning and counting.
Precise numerical facts such as you cite for the normal (Gaussian) in general depend on knowing one or more of the density, distribution or quantile functions, numerically if not analytically.
However, there aren't general relations available from knowing just the skewness or kurtosis. Skewness and kurtosis don't pin down the form of the distribution in general, as higher moments can vary too. | Skewness, kurtosis and how many standard deviations values are from the mean | You can always calculate how many SDs values are from the mean by just plugging in sample values, (value $-$ mean)/SD, and then binning and counting.
Precise numerical facts such as you cite for the | Skewness, kurtosis and how many standard deviations values are from the mean
You can always calculate how many SDs values are from the mean by just plugging in sample values, (value $-$ mean)/SD, and then binning and counting.
Precise numerical facts such as you cite for the normal (Gaussian) in general depend on knowing one or more of the density, distribution or quantile functions, numerically if not analytically.
However, there aren't general relations available from knowing just the skewness or kurtosis. Skewness and kurtosis don't pin down the form of the distribution in general, as higher moments can vary too. | Skewness, kurtosis and how many standard deviations values are from the mean
You can always calculate how many SDs values are from the mean by just plugging in sample values, (value $-$ mean)/SD, and then binning and counting.
Precise numerical facts such as you cite for the |
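The "plug in, bin, and count" recipe is a one-liner. A sketch with simulated normal data (my example, not from the answer); for other distributions the proportions need not match the normal's 68-95-99.7 values:

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.standard_normal(100_000)        # assumed sample; replace with your own data

z = (x - x.mean()) / x.std()            # (value - mean) / SD
within = [float(np.mean(np.abs(z) <= s)) for s in (1, 2, 3)]
print(within)                           # roughly [0.683, 0.954, 0.997] for normal data
```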
35,992 | Skewness, kurtosis and how many standard deviations values are from the mean | Here is a precise answer that shows that the median absolute deviation from the mean is not necessarily related to kurtosis.
Consider the family of distributions of $X = \mu + \sigma Z$, where $Z$ has the discrete distribution
$Z = -0.5$, with probability (wp) $.25$
$ = +0.5$, wp $.25$
$ = -1.2$, wp $.25 - \theta/2$
$ = +1.2$, wp $.25 - \theta/2$
$ = -\sqrt{0.155/\theta + 1.44}$, wp $\theta/2$
$ = +\sqrt{0.155/\theta + 1.44}$, wp $\theta/2$.
The family of distributions of $X$ is indexed by three parameters: $\mu$, $\sigma$, and $\theta$, with ranges $(-\infty, +\infty)$, $(0, +\infty)$ and $(0,.5)$.
In this family, $E(X) = \mu$, $Var(X) = \sigma^2$, and the median absolute deviation from the mean is $0.5\sigma$.
The kurtosis of $X$ is as follows:
kurtosis $= E(Z^4) = .5^4 * .5 + 1.2^4 * (.5 - \theta) + (0.155/\theta + 1.44)^2 * \theta$.
Within this family,
(i) kurtosis tends to infinity as $\theta \rightarrow 0$.
(ii) the distribution within the "shoulders" (i.e., within the $\mu \pm \sigma$ range) is constant for all values of kurtosis; it is simply the two points $\mu \pm \sigma/2$, wp $0.25$ each. This provides a counterexample to one interpretation of kurtosis, which states that larger kurtosis implies movement of mass away from the shoulders, simultaneously into the range between the shoulders and into the tails.
(iii) the "peak" of the distribution is also constant for all value of kurtosis; again, it is simply the two points $\mu \pm \sigma/2$, wp $0.25$ each. This provides a counterexample to the often given but obviously incorrect interpretation that larger kurtosis implies a more "peaked" distribution.
In this family, the central portion of the distribution actually becomes flatter as kurtosis increases, since the probabilities on $\mu \pm 1.2\sigma$ and $\mu \pm 0.5\sigma$ converge to the same value, $0.25$, as the kurtosis increases.
(iv) The median absolute deviation from the mean is constant, $0.5\sigma$, for all values of kurtosis. | Skewness, kurtosis and how many standard deviations values are from the mean | Here is a precise answer that shows that the median absolute deviation from the mean is not necessarily related to kurtosis.
Consider the family of distributions of $X = \mu + \sigma Z$, where $Z$ has | Skewness, kurtosis and how many standard deviations values are from the mean
Here is a precise answer that shows that the median absolute deviation from the mean is not necessarily related to kurtosis.
Consider the family of distributions of $X = \mu + \sigma Z$, where $Z$ has the discrete distribution
$Z = -0.5$, with probability (wp) $.25$
$ = +0.5$, wp $.25$
$ = -1.2$, wp $.25 - \theta/2$
$ = +1.2$, wp $.25 - \theta/2$
$ = -\sqrt{0.155/\theta + 1.44}$, wp $\theta/2$
$ = +\sqrt{0.155/\theta + 1.44}$, wp $\theta/2$.
The family of distributions of $X$ is indexed by three parameters: $\mu$, $\sigma$, and $\theta$, with ranges $(-\infty, +\infty)$, $(0, +\infty)$ and $(0,.5)$.
In this family, $E(X) = \mu$, $Var(X) = \sigma^2$, and the median absolute deviation from the mean is $0.5\sigma$.
The kurtosis of $X$ is as follows:
kurtosis $= E(Z^4) = .5^4 * .5 + 1.2^4 * (.5 - \theta) + (0.155/\theta + 1.44)^2 * \theta$.
Within this family,
(i) kurtosis tends to infinity as $\theta \rightarrow 0$.
(ii) the distribution within the "shoulders" (i.e., within the $\mu \pm \sigma$ range) is constant for all values of kurtosis; it is simply the two points $\mu \pm \sigma/2$, wp $0.25$ each. This provides a counterexample to one interpretation of kurtosis, which states that larger kurtosis implies movement of mass away from the shoulders, simultaneously into the range between the shoulders and into the tails.
(iii) the "peak" of the distribution is also constant for all value of kurtosis; again, it is simply the two points $\mu \pm \sigma/2$, wp $0.25$ each. This provides a counterexample to the often given but obviously incorrect interpretation that larger kurtosis implies a more "peaked" distribution.
In this family, the central portion of the distribution actually becomes flatter as kurtosis increases, since the probabilities on $\mu \pm 1.2\sigma$ and $\mu \pm 0.5\sigma$ converge to the same value, $0.25$, as the kurtosis increases.
(iv) The median absolute deviation from the mean is constant, $0.5\sigma$, for all values of kurtosis. | Skewness, kurtosis and how many standard deviations values are from the mean
Here is a precise answer that shows that the median absolute deviation from the mean is not necessarily related to kurtosis.
Consider the family of distributions of $X = \mu + \sigma Z$, where $Z$ has |
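The claimed properties of the family can be verified numerically (a sketch; the support and probabilities of $Z$ are taken directly from the construction above):

```python
import numpy as np

def z_moments(theta):
    """Support/probabilities of Z for theta in (0, 0.5); returns E[Z^2] and E[Z^4]."""
    tail = np.sqrt(0.155 / theta + 1.44)
    z = np.array([-0.5, 0.5, -1.2, 1.2, -tail, tail])
    p = np.array([0.25, 0.25, 0.25 - theta / 2, 0.25 - theta / 2, theta / 2, theta / 2])
    return float(p @ z**2), float(p @ z**4)   # variance of Z and kurtosis E[Z^4]

for theta in (0.4, 0.1, 0.001):
    var, kurt = z_moments(theta)
    print(theta, var, kurt)   # variance stays at 1; kurtosis grows without bound as theta -> 0
```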
35,993 | OLS in terms of means and sample size | The approach is correct, but there's a slight numerical error: there are only $100$ females, not $200$. The mean heights for males and females can be converted to sums via
$$\text{Sum of male heights} = 100 \times 175$$
and
$$\text{Sum of female heights} = 100 \times 165.$$
Therefore the sum of all heights is
$$\text{Sum of all heights} = 100 \times 175 + 100\times 165 = 200 \times 170,$$
as indicated in the question. Consequently the Normal equations are
$$\pmatrix{200 & 100 \\ 100 & 100}\pmatrix{\hat\beta_0 \\ \hat\beta_1} = \pmatrix{200\cdot170 \\ 100 \cdot165}$$
(not $165\cdot 200$ on the right side), with solution
$$(\hat\beta_0, \hat\beta_1) = (175, -10).$$ | OLS in terms of means and sample size | The approach is correct, but there's a slight numerical error: there are only $100$ females, not $200$. The mean heights for males and females can be converted to sums via
$$\text{Sum of male heights | OLS in terms of means and sample size
The approach is correct, but there's a slight numerical error: there are only $100$ females, not $200$. The mean heights for males and females can be converted to sums via
$$\text{Sum of male heights} = 100 \times 175$$
and
$$\text{Sum of female heights} = 100 \times 165.$$
Therefore the sum of all heights is
$$\text{Sum of all heights} = 100 \times 175 + 100\times 165 = 200 \times 170,$$
as indicated in the question. Consequently the Normal equations are
$$\pmatrix{200 & 100 \\ 100 & 100}\pmatrix{\hat\beta_0 \\ \hat\beta_1} = \pmatrix{200\cdot170 \\ 100 \cdot165}$$
(not $165\cdot 200$ on the right side), with solution
$$(\hat\beta_0, \hat\beta_1) = (175, -10).$$ | OLS in terms of means and sample size
The approach is correct, but there's a slight numerical error: there are only $100$ females, not $200$. The mean heights for males and females can be converted to sums via
$$\text{Sum of male heights |
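The normal equations above can be solved in two lines (a sketch confirming the arithmetic, not part of the answer):

```python
import numpy as np

# Normal equations from the answer: intercept column plus a female dummy
XtX = np.array([[200.0, 100.0],     # [n, n_female; n_female, n_female]
                [100.0, 100.0]])
Xty = np.array([200 * 170.0,        # sum of all heights
                100 * 165.0])       # sum of female heights
beta = np.linalg.solve(XtX, Xty)
print(beta)                         # [175. -10.]
```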
35,994 | OLS in terms of means and sample size | I'm quite confused. What does $u$ mean? Are these residuals? If so, then
$\mathbf{X'X}$ = $\begin{bmatrix}
200 & 100 \\ 100 & 100
\end{bmatrix}$
since
$\mathbf{X} = \frac{\partial{y}}{\partial\beta} =
\left[\begin{array}{cccc|cccc}
\frac{\partial{y_1}}{\partial\beta_1} & \frac{\partial{y_2}}{\partial\beta_1} & ... & \frac{\partial{y_{n_f}}}{\partial\beta_1} & \frac{\partial{y_{{n_f}+1}}}{\partial\beta_1} & \frac{\partial{y_{{n_f}+2}}}{\partial\beta_1} & ... & \frac{\partial{y_{n_{{n_f}+{n_m}}}}}{\partial\beta_1} \\
\frac{\partial{y_1}}{\partial\beta_2} & \frac{\partial{y_2}}{\partial\beta_2} & ... & \frac{\partial{y_{n_f}}}{\partial\beta_2} & \frac{\partial{y_{{n_f}+1}}}{\partial\beta_2} & \frac{\partial{y_{{n_f}+2}}}{\partial\beta_2} & ... & \frac{\partial{y_{n_{{n_f}+{n_m}}}}}{\partial\beta_2} \\
\end{array}\right]^T$
=
$\left[\begin{array}{cccc|cccc}
1 & 1 & ... & 1 & 1 & 1 & ... & 1 \\
0 & 0 & ... & 0 & 1 & 1 & ... & 1 \\
\end{array}\right]^T$
Some thoughts:
Given your equation $\beta_1$ IMHO should be 175 and $\beta_2$ = -10. So for the male and female part you get:
$f_m = 175 + (-10) \times 0 + u = 175 + u$
$f_f = 175 + (-10) \times 1 + u = 165 + u$
Since you can use
$\mathbf{\beta} = \left(X'X\right)^{-1}X^{T}\mathbf{y}$
to solve for $\beta$ by using the Moore-Penrose Pseudoinverse.
$\left(\left(X'X\right)^{-1}X^{T}\right)^{+}\beta=\left(\left(X'X\right)^{-1}X^{T}\right)^{+}\begin{bmatrix}
175 \\
-10
\end{bmatrix}=\mathbf{y}$
Now $\mathbf{y}$ contains:
$\mathbf{y}
\approx \begin{bmatrix}
165_{f_1} & 165_{f_2} & ... 165_{f_{100}} & 175_{m_1} & 175_{m_2} & ... 175_{m_{100}}
\end{bmatrix}^T$
Hope it helps!
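To make the design matrix concrete, here is a small sketch I'm adding (it assumes, as in the matrices above, 200 observations with an intercept column and a 0/1 indicator that is 0 for the first 100 rows and 1 for the last 100):

```python
# Build the 200x2 design matrix: intercept column + 0/1 group indicator
# (0 for the first 100 rows, 1 for the last 100), then form X'X by sums.
X = [[1, 0]] * 100 + [[1, 1]] * 100

xtx = [[0, 0], [0, 0]]
for row in X:
    for i in range(2):
        for j in range(2):
            xtx[i][j] += row[i] * row[j]

print(xtx)  # [[200, 100], [100, 100]]
```

The result matches the $\mathbf{X'X}$ matrix written out at the start of this answer.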
35,995 | What is the necessary condition for an unbiased estimator to be UMVUE? | Let us show that there can be a UMVUE which is not a sufficient statistic.
First of all, if the estimator $T$ takes (say) the value $0$ on all samples, then clearly $T$ is a UMVUE of $0$, which can in turn be considered a (constant) function of $\theta$. On the other hand, this estimator $T$ is clearly not sufficient in general.
It is a bit harder to find a UMVUE $Y$ of the "entire" unknown parameter $\theta$ (rather than a UMVUE of a function of it) such that $Y$ is not sufficient for $\theta$. E.g., suppose the "data" are given just by one normal r.v. $X\sim N(\tau,1)$, where $\tau\in\mathbb{R}$ is unknown. Clearly, $X$ is sufficient and complete for $\tau$.
Let $Y=1$ if $X\ge0$ and $Y=0$ if $X<0$, and let
$\theta:=\mathsf{E}_\tau Y=\mathsf{P}_\tau(X\ge0)=\Phi(\tau)$; as usual, we denote by $\Phi$ and $\varphi$, respectively, the cdf and pdf of $N(0,1)$.
So, the estimator $Y$ is unbiased for $\theta=\Phi(\tau)$ and is a function of the complete sufficient statistic $X$. Hence,
$Y$ is a UMVUE of $\theta=\Phi(\tau)$.
On the other hand, the function $\Phi$ is continuous and strictly increasing on $\mathbb{R}$, from $0$ to $1$. So, the correspondence $\mathbb{R}\ni\tau=\Phi^{-1}(\theta)\leftrightarrow\theta=\Phi(\tau)\in(0,1)$ is a bijection. That is, we can re-parametrize the problem, from $\tau$ to $\theta$, in a one-to-one manner. Thus, $Y$ is a UMVUE of $\theta$, not just for the "old" parameter $\tau$, but for the "new" parameter $\theta\in(0,1)$ as well. However, $Y$ is not sufficient for $\tau$ and therefore not sufficient for $\theta$. Indeed,
\begin{multline*}
\mathsf{P}_\tau(X<-1|Y=0)=\mathsf{P}_\tau(X<-1|X<0)=\frac{\mathsf{P}_\tau(X<-1)}{\mathsf{P}_\tau(X<0)} \\
=\frac{\Phi(-\tau-1)}{\Phi(-\tau)}
\sim\frac{\varphi(-\tau-1)/(\tau+1)}{\varphi(-\tau)/\tau}\sim\frac{\varphi(-\tau-1)}{\varphi(-\tau)}=e^{-\tau-1/2}
\end{multline*}
as $\tau\to\infty$; here we used the known asymptotic equivalence $\Phi(-\tau)\sim\varphi(-\tau)/\tau$ as $\tau\to\infty$, which follows by
the l'Hospital rule.
So, $\mathsf{P}_\tau(X<-1|Y=0)$ depends on $\tau$ and hence on $\theta$, which shows that $Y$ is not sufficient for $\theta$ (whereas $Y$ is a UMVUE for $\theta$).
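The asymptotic claim $\Phi(-\tau-1)/\Phi(-\tau)\sim e^{-\tau-1/2}$ can be sanity-checked numerically (a sketch I'm adding; it uses the identity $\Phi(-x)=\tfrac12\,\mathrm{erfc}(x/\sqrt2)$ from the standard library rather than any statistics package):

```python
import math

def Phi(x):
    """Standard normal cdf via the complementary error function."""
    return 0.5 * math.erfc(-x / math.sqrt(2))

tau = 20.0
ratio = Phi(-tau - 1) / Phi(-tau)     # P(X < -1 | Y = 0) at this tau
approx = math.exp(-tau - 0.5)         # the asymptotic value e^{-tau - 1/2}

print(ratio / approx)                 # close to 1 for large tau
```

At $\tau=20$ the two quantities already agree to within a few percent (the remaining gap is essentially the $\tau/(\tau+1)$ factor dropped in the last asymptotic step above).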
35,996 | What is the necessary condition for an unbiased estimator to be UMVUE? | On Uniformly Minimum Variance Unbiased Estimation when no Complete Sufficient Statistics Exist by L. Bondesson gives some examples of UMVUEs which are not complete sufficient statistics, including the following one:
Let $X_1, \ldots, X_n$ be independent observations of a random variable $X = \mu + \sigma Y$, where $\mu$ and $\sigma$ are unknown, and $Y$ is gamma distributed with known shape parameter $k$ and known scale parameter $\theta$. Then $\bar{X}$ is the UMVUE of $E(X) = \mu + k\theta\sigma$. However, when $k \neq 1$, there is no complete sufficient statistic for $(\mu, \sigma)$.
35,997 | Non-normally distributed data - Box-Cox transformation? | The data are highly skewed & take just a few discrete values: the within-pair differences must consist of predominantly noughts & ones; no transformation will make them look much like normal variates. This is typical of count data where counts are fairly low.
If you assume that counts for each individual $j$ follow a different Poisson distribution, & that the change from low to high load condition has the same multiplicative effect on the rate parameter of each, you can extend the idea in significance of difference between two counts to a matched-pair design by conditioning on the total count for each pair, $n_j$:
$\sum_{j=1}^m X_{1j} \sim \mathrm{Bin}\left(\sum_{j=1}^m n_j, \theta\right)$
where $m$ is the no. of pairs. So the analysis reduces to inference about the Bernoulli parameter in a binomial experiment— 7 "successes" out of 24 trials if I read your graphs right.
Check the homogeneity of proportions across pairs—& note if they're too homogeneous it might indicate underdispersion (relative to a Poisson) of the original count variables.
Note that this approach is equivalent to the generalized linear model suggested for Poisson Repeated Measures ANOVA†: while it tells you nothing about the nuisance parameters, point & interval estimates for the parameter of interest can be worked out on the back of a fag packet (so you don't need to worry about software requirements).
† Parametrize your model with the log odds $\zeta=\log_\mathrm{e} \frac{\theta}{1-\theta}$: then the maximum-likelihood estimator is $$\hat\zeta=\log_\mathrm{e}\frac{\sum x_{1j}}{\sum n_j - \sum x_{1j}}=\log_\mathrm{e}\frac{7}{24-7}\approx -0.887$$ with standard error $$\sqrt\frac{\sum n_j}{\sum x_{1j}(\sum n_j-\sum x_{1j})}=\sqrt\frac{24}{7\cdot(24-7)}\approx 0.449$$ for Wald tests & confidence intervals. If you want to adjust for over-/under-dispersion (i.e. use "quasi-Poisson" regression), estimate the dispersion parameter as Pearson's chi-squared statistic (for association) divided by its degrees of freedom (22) & multiply the standard error by its square root.
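For what it's worth, the footnote's arithmetic really can be done on the back of a fag packet — or in a few lines of plain Python (a sketch I'm adding, not part of the original answer):

```python
import math

successes, trials = 7, 24  # 7 "successes" out of 24 trials

# Maximum-likelihood log odds and its Wald standard error
zeta_hat = math.log(successes / (trials - successes))
se = math.sqrt(trials / (successes * (trials - successes)))

# 95% Wald confidence interval on the log-odds scale
ci = (zeta_hat - 1.96 * se, zeta_hat + 1.96 * se)

print(round(zeta_hat, 3), round(se, 3))  # -0.887 0.449
```

This reproduces $\hat\zeta \approx -0.887$ and the standard error $\approx 0.449$ quoted in the footnote.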
35,998 | Email and IP String preprocessing for classification task | This is a really interesting question! String vectorization is an area of active research right now, and there's a ton of interesting approaches out there.
First of all, ip addresses are hierarchical, and can be split by decimals into 4 categorical variables, each with 256 levels (watch out for IPv4 vs IPv6 though)! In a linear model, you can use the top level ip block directly, perhaps interacted with the 2nd, 3rd, and 4th block depending on how much data you have. In a tree-based model (e.g. a random forest or GBM), try converting the ip address to an integer and modeling it directly. A random forest or GBM should be able to identify interesting blocks of the ip range for your model. Most databases have functions to do this conversion, and I know there's a really good R package too.
For email addresses, start by splitting on the @ symbol into address, domain. Domain is probably useful on its own as a categorical variable, but you might want to further add a variable for .com vs .edu vs .gov, etc. (The urltools package in R can help you extract top-level domains— someone really needs to write an emailtools package!) For the address part (the bit before the @ symbol), you could use a character n-gram vectorizer to create a very wide, very sparse matrix which you can then use directly in your model, or can further process using something like SVD to reduce its dimensionality. You could also try a word vectorizer, splitting on symbols like ., -, and _.
There's a TON of information in those 2 fields— good luck extracting it!
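Both ideas can be sketched in a few lines (my own illustration with hypothetical helper names `ip_to_int` and `email_features`, stdlib only — the R packages mentioned above do this more robustly, e.g. handling IPv6 and multi-part TLDs):

```python
def ip_to_int(ip):
    """Convert a dotted-quad IPv4 address to a single integer feature."""
    a, b, c, d = (int(part) for part in ip.split("."))
    return ((a * 256 + b) * 256 + c) * 256 + d

def email_features(email):
    """Split an email into (local part, domain, top-level domain)."""
    local, domain = email.split("@", 1)
    tld = domain.rsplit(".", 1)[-1]  # crude: ignores multi-part TLDs like .co.uk
    return local, domain, tld

print(ip_to_int("192.168.1.1"))               # 3232235777
print(email_features("jane.doe@example.com"))  # ('jane.doe', 'example.com', 'com')
```

The integer form of the IP preserves the hierarchical block structure, so a tree-based model can split on contiguous ranges directly.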
35,999 | Spearman's correlation as a parameter | There's an interpretation given in some work on copulas.
e.g. see p 15 of Embrechts et al (2001) [1], which has for the Spearman correlation of $(X,Y)^T$:
$\rho_S(X,Y)=3(\mathbb{P}\{(X-\tilde{X})(Y-Y')>0\}-\mathbb{P}\{(X-\tilde{X})(Y-Y')<0\})$
where $(X, Y)^T$, $(\tilde{X},\tilde{Y})^T$
and $(X',Y')^T$ are independent copies. (It then goes on to show your interpretation holds for that definition.)
[1] Paul Embrechts, Filip Lindskog and Alexander McNeil (2001),
"Modelling Dependence with Copulas and Applications to Risk Management"
http://www.risklab.ch/ftp/papers/DependenceWithCopulas.pdf
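As an illustration (a Monte Carlo sketch of my own, not from the paper): in the comonotone case $Y = X$ the displayed formula should give $\rho_S = 1$, which is easy to check by simulating the independent copies:

```python
import random

random.seed(0)
n = 100_000
pos = neg = 0
for _ in range(n):
    # (X, Y) with Y = X, plus independent copies X~ and Y'
    x = random.random()          # X (and Y = X)
    x_tilde = random.random()    # X~, an independent copy
    y_prime = random.random()    # Y', an independent copy
    prod = (x - x_tilde) * (x - y_prime)
    if prod > 0:
        pos += 1
    elif prod < 0:
        neg += 1

rho_hat = 3 * (pos - neg) / n    # 3(P{... > 0} - P{... < 0})
print(rho_hat)                   # close to 1
```

Here $\mathbb{P}\{\cdot>0\} = 2/3$ and $\mathbb{P}\{\cdot<0\} = 1/3$ analytically, so the estimate converges to $3(2/3 - 1/3) = 1$, as Spearman's correlation of perfectly concordant variables should.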
36,000 | Difference between "computational statistics" and "statistical computing"? | Roughly speaking, Computational Statistics refers to the statistical topics that require heavy computation, while Statistical Computing refers to the computational/numerical methods that can be applied to statistics.
We apply the computational methods from Statistical Computing to implement the statistical methods from Computational Statistics.
However, the computational methods from Statistical Computing can be applied to all of Statistics (not just Computational Statistics).
Statistical Computing and Computational Statistics overlap but neither discipline is a subset of the other.