The limit of "unit-variance" ridge regression estimator when $\lambda\to\infty$
This is an algebraic counterpart to @Martijn's beautiful geometric answer.
First of all, the limit of $$\hat{\boldsymbol\beta}_\lambda^* = \arg\min\Big\{\|\mathbf y - \mathbf X \boldsymbol \beta\|^2+\lambda\|\boldsymbol\beta\|^2\Big\} \:\:\text{s.t.}\:\: \|\mathbf X \boldsymbol\beta\|^2=1$$ when $\lambda\to\infty$ is very simple to obtain: in the limit, the first term in the loss function becomes negligible and can thus be disregarded. The optimization problem becomes $$\lim_{\lambda\to\infty}\hat{\boldsymbol\beta}_\lambda^* = \hat{\boldsymbol\beta}_\infty^* = \operatorname*{arg\,min}_{\|\mathbf X \boldsymbol\beta\|^2=1}\|\boldsymbol\beta\|^2 \sim \operatorname*{arg\,max}_{\| \boldsymbol\beta\|^2=1}\|\mathbf X\boldsymbol\beta\|^2,$$ which is the first principal component of $\mathbf X$ (appropriately scaled). This answers the question.
Now let us consider the solution for any value of $\lambda$ that I referred to in point #2 of my question. Adding to the loss function the Lagrange multiplier $\mu(\|\mathbf X\boldsymbol\beta\|^2-1)$ and differentiating, we obtain
$$\hat{\boldsymbol\beta}_\lambda^*=\big((1+\mu)\mathbf X^\top \mathbf X + \lambda \mathbf I\big)^{-1}\mathbf X^\top \mathbf y\:\:\text{with $\mu$ needed to satisfy the constraint}.$$
How does this solution behave when $\lambda$ grows from zero to infinity?
When $\lambda=0$, we obtain a scaled version of the OLS solution: $$\hat{\boldsymbol\beta}_0^* \sim \hat{\boldsymbol\beta}_0.$$
For positive but small values of $\lambda$, the solution is a scaled version of some ridge estimator: $$\hat{\boldsymbol\beta}_\lambda^* \sim \hat{\boldsymbol\beta}_{\lambda^*}.$$
When $\lambda=\|\mathbf X\mathbf X^\top \mathbf y\|$, the value of $(1+\mu)$ needed to satisfy the constraint is $0$. This means that the solution is a scaled version of the first PLS component (meaning that $\lambda^*$ of the corresponding ridge estimator is $\infty$): $$\hat{\boldsymbol\beta}_{\|\mathbf X\mathbf X^\top \mathbf y\|}^* \sim \mathbf X^\top \mathbf y.$$
When $\lambda$ becomes larger than that, the necessary $(1+\mu)$ term becomes negative. From now on, the solution is a scaled version of a pseudo-ridge estimator with negative regularization parameter (negative ridge). In terms of directions, we are now past ridge regression with infinite lambda.
When $\lambda\to\infty$, the term $\big((1+\mu)\mathbf X^\top \mathbf X + \lambda \mathbf I\big)^{-1}$ would go to zero (or diverge to infinity) unless $\mu = -\lambda/ s^2_\mathrm{max} + \alpha$, where $s_\mathrm{max}$ is the largest singular value of $\mathbf X=\mathbf{USV}^\top$. This makes $\hat{\boldsymbol\beta}_\lambda^*$ finite and proportional to the first principal axis $\mathbf V_1$. We need to set $\mu = -\lambda/ s^2_\mathrm{max} + \mathbf U_1^\top \mathbf y -1$ to satisfy the constraint. Thus, we obtain that $$\hat{\boldsymbol\beta}_\infty^* \sim \mathbf V_1.$$
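The $\lambda=\|\mathbf X\mathbf X^\top \mathbf y\|$ step above is easy to check numerically: when $(1+\mu)=0$, the solution collapses to $\mathbf X^\top\mathbf y/\lambda$ and the constraint $\|\mathbf X\boldsymbol\beta\|=1$ holds exactly. A quick sketch with arbitrary made-up data:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 5))   # arbitrary data matrix
y = rng.standard_normal(50)

# At lambda = ||X X^T y||, (1 + mu) = 0 and the solution reduces to
# beta = X^T y / lambda, i.e. the (scaled) first PLS component.
lam = np.linalg.norm(X @ X.T @ y)
beta = X.T @ y / lam

# The unit-variance constraint ||X beta|| = 1 is satisfied exactly:
print(np.linalg.norm(X @ beta))  # 1.0 (up to floating point)
```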
Overall, we see that this constrained minimization problem encompasses unit-variance versions of OLS, RR, PLS, and PCA on the following spectrum:
$$\boxed{\text{OLS} \to \text{RR} \to \text{PLS} \to \text{negative RR} \to \text{PCA}}$$
This seems to be equivalent to an obscure (?) chemometrics framework called "continuum regression" (see https://scholar.google.de/scholar?q="continuum+regression", in particular Stone & Brooks 1990, Sundberg 1993, Björkström & Sundberg 1999, etc.) which allows the same unification by maximizing an ad hoc criterion $$\mathcal T = \operatorname{corr}^2(\mathbf y, \mathbf X \boldsymbol\beta)\cdot \operatorname{Var}^\gamma(\mathbf X\boldsymbol\beta) \;\;\text{s.t.}\;\;\|\boldsymbol\beta\|=1.$$ This obviously yields scaled OLS when $\gamma=0$, PLS when $\gamma=1$, PCA when $\gamma\to\infty$, and can be shown to yield scaled RR for $0<\gamma<1$ and scaled negative RR for $1<\gamma<\infty$, see Sundberg 1993.
Despite having quite a bit of experience with RR/PLS/PCA/etc., I have to admit I had never heard of "continuum regression" before. I should also say that I dislike this term.
A schematic that I made based on @Martijn's:
Update: Figure updated with the negative ridge path, huge thanks to @Martijn for suggesting how it should look. See my answer in Understanding negative ridge regression for more details.
How does minibatch gradient descent update the weights for each example in a batch?
Gradient descent doesn't quite work the way you suggested but a similar problem can occur.
We don't calculate the average loss from the batch; we calculate the average of the gradients of the loss function. The gradients are the derivatives of the loss with respect to the weights, and in a neural network the gradient for one weight depends on the inputs of that specific example and also on many other weights in the model.
If your model has 5 weights and you have a mini-batch size of 2 then you might get this:
Example 1. Loss=2, $\text{gradients}=(1.5,-2.0,1.1,0.4,-0.9)$
Example 2. Loss=3, $\text{gradients}=(1.2,2.3,-1.1,-0.8,-0.7)$
The average of the gradients in this mini-batch is calculated; it is $(1.35, 0.15, 0, -0.2, -0.8)$.
The benefit of averaging over several examples is that the variation in the gradient is lower, so the learning is more consistent and less dependent on the specifics of one example. Notice how the average gradient for the third weight is $0$: this weight won't change in this update, but the average will likely be non-zero for the next mini-batch, which is computed with the updated weights.
edit in response to comments:
In my example above the average of the gradients is computed. For a mini-batch of size $k$, we calculate the loss $L_i$ for each example and aim to get the average gradient of the loss with respect to a weight $w_j$.
The way I wrote it in my example I averaged each gradient like: $\frac{\partial L}{\partial w_j} = \frac{1}{k} \sum_{i=1}^{k} \frac{\partial L_i}{\partial w_j}$
The tutorial code you linked to in the comments uses TensorFlow to minimize the average loss.
TensorFlow aims to minimize $\frac{1}{k} \sum_{i=1}^{k} L_i$.
To minimize this, it computes the gradients of the average loss with respect to each weight and uses gradient descent to update the weights:
$\frac{\partial L}{\partial w_j} = \frac{\partial }{\partial w_j} \frac{1}{k} \sum_{i=1}^{k} L_i$
The differentiation can be brought inside the sum so it's the same as the expression from the approach in my example.
$\frac{\partial }{\partial w_j} \frac{1}{k} \sum_{i=1}^{k} L_i = \frac{1}{k} \sum_{i=1}^{k} \frac{\partial L_i}{\partial w_j}$
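This identity can be verified numerically; here is a minimal sketch with a linear model and squared-error loss (the weights and data are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.standard_normal(5)        # 5 weights, as in the example above
X = rng.standard_normal((2, 5))   # mini-batch of k = 2 examples
y = rng.standard_normal(2)

# Per-example loss L_i = (x_i . w - y_i)^2, so dL_i/dw = 2 (x_i . w - y_i) x_i
per_example_grads = 2 * (X @ w - y)[:, None] * X
avg_of_grads = per_example_grads.mean(axis=0)    # average the per-example gradients

# Gradient of the average loss (1/k) sum_i L_i, computed directly
grad_of_avg = (2 / len(y)) * X.T @ (X @ w - y)

print(np.allclose(avg_of_grads, grad_of_avg))    # True: the two approaches agree
```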
How does minibatch gradient descent update the weights for each example in a batch?
The reason to use mini-batches is to have a good number of training examples, so that the possible noise from any single one is reduced by averaging their effects, while avoiding a full batch, which for many datasets could require a huge amount of memory.
One important fact is that the error you evaluate is always a distance between your predicted output and the real output: that means it can't be negative, so you can't have, as you said, errors of 2 and -2 that cancel out; they would instead become an error of 4.
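To make that concrete, a trivial sketch with a squared-error loss:

```python
# Residuals of 2 and -2 don't cancel under a squared-error loss:
residuals = [2, -2]
mean_squared_error = sum(r**2 for r in residuals) / len(residuals)
print(mean_squared_error)  # 4.0
```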
You then evaluate the gradient of the error with respect to all the weights, so you can compute which change in the weights would reduce the error the most. Once you do so, you take a "step" in that direction, scaled by your learning rate alpha. (These are the basic concepts; I'm not going into detail about backpropagation for deep NNs.)
After running this training on your dataset for a certain number of epochs, you can expect your network to converge, provided your learning step is not so big that it makes training diverge. You could still end up in a local minimum; this can be avoided by initializing your weights differently, using different optimizers, and trying to regularize.
Group differences on a five point Likert item
Clason & Dormody discussed the issue of statistical testing for Likert items (Analyzing data measured by individual Likert-type items). I think that a bootstrapped test is ok when the two distributions look similar (bell shaped and equal variance). However, a test for categorical data (e.g. trend or Fisher test, or ordinal logistic regression) would be interesting too since it allows to check for response distribution across the item categories, see Agresti's book on Categorical Data Analysis (Chapter 7 on Logit models for multinomial responses).
Aside from this, you can imagine situations where the t-test (and some non-parametric tests as well) would fail because the response distribution is strongly imbalanced between the two groups. For example, if all people from group A answer 1 or 5 (in equal proportions) whereas all people in group B answer 3, then you end up with identical within-group means and the test is not meaningful at all; moreover, in this case the homoscedasticity assumption is largely violated.
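That degenerate case is easy to demonstrate; a sketch with made-up Likert responses (using `scipy.stats.ttest_ind`):

```python
import numpy as np
from scipy import stats

# Group A: answers split equally between the extremes 1 and 5
a = np.array([1] * 50 + [5] * 50)
# Group B: everyone answers the midpoint 3
b = np.array([3] * 100)

print(a.mean(), b.mean())   # 3.0 3.0 -- identical group means

# The t-test sees no difference at all, despite the wildly different
# response distributions (and a badly violated homoscedasticity assumption).
t, p = stats.ttest_ind(a, b)
print(t, p)                 # t = 0, p = 1
```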
Group differences on a five point Likert item
Depending on the size of the dataset in question, a permutation test might be preferable to a bootstrap in that it may be able to provide an exact test of the hypothesis (and an exact CI).
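For concreteness, here is a minimal Monte-Carlo approximation of a two-sample permutation test on Likert responses (the data are made up; with small samples one could enumerate all permutations to get an exact test):

```python
import numpy as np

rng = np.random.default_rng(0)
a = np.array([1, 2, 2, 3, 3, 3, 4, 4, 5, 5])  # Likert responses, group A
b = np.array([2, 3, 3, 4, 4, 4, 5, 5, 5, 5])  # Likert responses, group B

observed = a.mean() - b.mean()
pooled = np.concatenate([a, b])

# Repeatedly shuffle group labels and recompute the mean difference
n_perm = 10_000
count = 0
for _ in range(n_perm):
    perm = rng.permutation(pooled)
    diff = perm[: len(a)].mean() - perm[len(a):].mean()
    count += abs(diff) >= abs(observed)

p_value = (count + 1) / (n_perm + 1)   # add-one correction for a valid p-value
print(p_value)
```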
Group differences on a five point Likert item
IMHO you cannot use a t-test for Likert scales. The Likert scale is ordinal and "knows" only about relations between values of a variable: e.g. "totally dissatisfied" is worse than "somewhat dissatisfied". A t-test, on the other hand, needs to calculate means and thus needs interval data. You can map Likert scores to interval data ("totally dissatisfied" is 1, and so on), but nobody guarantees that "totally dissatisfied" is the same distance from "somewhat dissatisfied" as "somewhat dissatisfied" is from "neither nor". By the way: what is the difference between "totally dissatisfied" and "somewhat dissatisfied"? So in the end you'd do a t-test on the coded values of your ordinal data, but that just doesn't make any sense.
Group differences on a five point Likert item
If each single item in the questionnaire is ordinal, and I don't think that this point can be disputed given that there is no way of knowing whether the quantitative difference between "strongly agree" and "agree" is the same as that between "strongly disagree" and "disagree", then why would the summation of all these ordinal scales produce a value that shares the properties of true interval level data?
For example, if we are interpreting the results from a depression inventory, it doesn't make sense (to me at least) to say that a person with a score of "20" is twice as depressed as a person with a score of "10". This is because each item in the questionnaire isn't measuring actual differences in levels of depression (assuming that depression is a stable, internal, organic disorder) but rather the person's subjective rating of agreement with a particular statement. When asked, "how depressed would you say your mood is on a scale of 1-4, 1 being very depressed and 4 being not depressed at all", how do I know that one respondent's subjective rating of 1 is the same as another respondent's? Or how can I know whether the difference between 4 and 3 is the same as that between 3 and 2 in terms of the person's current level of depression? If we can't know any of this, then it doesn't make any sense to treat the summation of all these ordinal items as interval level data. Even if the data do form a normal distribution, I don't think it is appropriate to treat the differences between scores as interval level data if they were computed by adding up all the responses to Likert items. A normal distribution just means that the responses are probably representative of the greater population; it doesn't imply that the values obtained from the inventories share the important properties of interval level data.
We need to be careful in the behavioural sciences about how we use statistics to speak to the latent variables we are studying, since there is no direct way of measuring these hypothetical constructs and there are going to be significant problems when we attempt to quantify them and subject them to parametric tests. Again, simply because we have assigned values to a set of responses doesn't mean that differences between these values are meaningful.
Group differences on a five point Likert item
A proportional odds model is better than a t-test for a Likert item scale.
Group differences on a five point Likert item
I will try to explain the proportional odds model in this context, since it was suggested in at least two answers to this question.
The score test of a proportional odds model is equivalent to the Wilcoxon rank sum test.
More precisely, the score test statistic for no effect of a single dichotomous covariate in a proportional odds cumulative logistic regression model (McCullagh 1980) for an ordinal outcome was shown to be equal to the Wilcoxon rank sum test statistic. (Proof in An extension of the Wilcoxon Rank-Sum test for complex sample survey data.)
Just like the Wilcoxon rank sum test, this test detects whether the two samples were drawn from different distributions, regardless of their expected values.
Also like the Wilcoxon rank sum test, this test is not valid if you only want to detect whether the two samples were drawn from distributions with different expected values.
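As an illustration with made-up Likert data, the Wilcoxon rank sum test is available in SciPy as `scipy.stats.mannwhitneyu` (the Mann-Whitney U test is the same test):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Two groups answering a 5-point item, with group B shifted towards "agree"
a = rng.choice([1, 2, 3, 4, 5], size=40, p=[0.40, 0.30, 0.15, 0.10, 0.05])
b = rng.choice([1, 2, 3, 4, 5], size=40, p=[0.05, 0.10, 0.15, 0.30, 0.40])

# Wilcoxon rank sum / Mann-Whitney U test on the ordinal responses
u, p = stats.mannwhitneyu(a, b, alternative="two-sided")
print(p)  # small: the two response distributions differ
```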
How to interpret smooth l1 loss?
Smooth L1-loss can be interpreted as a combination of L1-loss and L2-loss. It behaves as L1-loss when the absolute value of the argument is high, and it behaves like L2-loss when the absolute value of the argument is close to zero. The equation is:
$$L_{1;\text{smooth}} = \begin{cases}|x| & \text{if } |x| > \alpha \\ \frac{1}{\alpha}x^2 & \text{if } |x| \leq \alpha\end{cases}$$
$\alpha$ is a hyper-parameter here and is usually taken as 1. The factor $\frac{1}{\alpha}$ on the $x^2$ term makes the function continuous at $|x| = \alpha$.
Smooth L1-loss combines the advantages of L1-loss (steady gradients for large values of $x$) and L2-loss (less oscillations during updates when $x$ is small).
Another form of smooth L1-loss is Huber loss. They achieve the same thing. Taken from Wikipedia, Huber loss is
$
L_\delta (a) = \begin{cases}
\frac{1}{2}{a^2} & \text{for } |a| \le \delta, \\
\delta (|a| - \frac{1}{2}\delta), & \text{otherwise.}
\end{cases}
$
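As a quick sanity check, both pieces can be written in a few lines of NumPy (a sketch, not the reference implementation from any particular library; `alpha` and `delta` are assumed positive):

```python
import numpy as np

def smooth_l1(x, alpha=1.0):
    """Smooth L1-loss as given above: linear for |x| > alpha, quadratic otherwise."""
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) > alpha, np.abs(x), x**2 / alpha)

def huber(a, delta=1.0):
    """Huber loss as quoted from Wikipedia above."""
    a = np.asarray(a, dtype=float)
    return np.where(np.abs(a) <= delta, 0.5 * a**2, delta * (np.abs(a) - 0.5 * delta))
```

Note that both are continuous at the switch-over point: with the defaults, `smooth_l1(1.0)` gives `1.0` from either branch.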
12,111

Difference between scikit-learn implementations of PCA and TruncatedSVD
PCA and TruncatedSVD scikit-learn implementations seem to be exactly the same algorithm.
No: PCA is (truncated) SVD on centered data (by per-feature mean subtraction). If the data is already centered, those two classes will do the same.
In practice TruncatedSVD is useful on large sparse datasets which cannot be centered without making the memory usage explode.
numpy.linalg.svd and scipy.linalg.svd both rely on LAPACK _GESDD, described here: http://www.netlib.org/lapack/lug/node32.html (divide-and-conquer driver).
scipy.sparse.linalg.svds relies on ARPACK to do an eigenvalue decomposition of $X^T X$ or $X X^T$ (depending on the shape of the data) via the Arnoldi iteration method. The HTML user guide of ARPACK has broken formatting which hides the computational details, but the Arnoldi iteration is well described on Wikipedia: https://en.wikipedia.org/wiki/Arnoldi_iteration
Here is the code for the ARPACK-based SVD in scipy:
https://github.com/scipy/scipy/blob/master/scipy/sparse/linalg/eigen/arpack/arpack.py#L1642 (search for the string "def svds" in case the line numbers change in the source code).
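Coming back to the centering point: a minimal NumPy-only sketch (not using scikit-learn itself; the data and seed are made up for illustration) shows that on raw data with a common non-zero mean, the top singular vector is pulled towards the mean direction rather than towards the direction of maximal variance.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(loc=10.0, size=(200, 3))  # every feature has mean ~10

Xc = X - X.mean(axis=0)                  # per-feature mean subtraction

# PCA amounts to a (truncated) SVD of the centered matrix
_, s_centered, Vt_centered = np.linalg.svd(Xc, full_matrices=False)

# TruncatedSVD works on the raw matrix: its leading component is
# dominated by the common mean direction (1,1,1)/sqrt(3)
_, s_raw, Vt_raw = np.linalg.svd(X, full_matrices=False)

mean_dir = np.ones(3) / np.sqrt(3)
alignment = abs(Vt_raw[0] @ mean_dir)    # close to 1 on uncentered data
```

On already-centered input the two computations are literally the same SVD, which is the "those two classes will do the same" case above.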
12,112

Difference between scikit-learn implementations of PCA and TruncatedSVD
There is also a difference in how the explained_variance_ attribute is calculated.
Let the data matrix $\mathbf X$ be of $n \times p$ size, where $n$ is the number of samples and $p$ is the number of variables. And $\mathbf{X}_c$ is the centered data matrix, i.e. column means have been subtracted and are now equal to zero in this matrix. Assume that we are reducing the dimensionality of the data from $p$ to $k \lt p$.
Then for sklearn.decomposition.PCA we have the following expressions:
$$\mathbf{X}_c \approx \mathbf{U}_k \mathbf{S}_k \mathbf{V}_k^T \qquad (\text{truncated SVD of } \mathbf{X}_c);$$
$$\mathbf{L}_k = \frac{1}{n-1} \mathbf{S}^2_k \quad \Longleftrightarrow \quad \lambda_j = \frac{s_j^2}{n-1}, \quad \forall j =1,\ldots,k; \qquad(*)$$
where $\mathbf{L}_k = \mathrm{diag}(\lambda_1, \ldots, \lambda_k)$ is the matrix of the $k$ largest eigenvalues of the covariance matrix $\mathbf{C} = \frac{1}{n-1} \mathbf{X}_c^T\mathbf{X}_c$, and $\mathbf{S}_k = \mathrm{diag}(s_1, \ldots, s_k)$ is the matrix of the $k$ largest singular values of $\mathbf{X}_c$. You can prove $(*)$ by substituting the truncated SVD of $\mathbf{X}_c$ into the expression for the covariance matrix $\mathbf{C}$ and comparing the result with the truncated eigendecomposition $\mathbf{C} \approx \mathbf{V}_k \mathbf{L}_k \mathbf{V}_k^T$ (here it was done for full decompositions). The matrix $\mathbf{L}_k$ is called the explained_variance_ attribute in sklearn.decomposition.PCA.
But for sklearn.decomposition.TruncatedSVD only the following holds:
$$\mathbf{X} \approx \tilde{\mathbf{U}}_k \tilde{\mathbf{S}}_k \tilde{\mathbf{V}}_k^T \qquad (\text{truncated SVD of } \mathbf{X}).$$
In this case we can't get a simple equation like $(*)$ linking $\mathbf{L}_k$ and $\tilde{\mathbf{S}}_k$, because substituting the truncated SVD of $\mathbf{X}$ into the expression $\mathbf{C} = \frac{1}{n-1} \mathbf{X}_c^T\mathbf{X}_c = \frac{1}{n-1}\mathbf{X}^T\mathbf{X} - \frac{n}{n-1}\bar{x}\bar{x}^T$ is not very useful. So explained_variance_ in sklearn.decomposition.TruncatedSVD is instead calculated as np.var(X_transformed, axis=0), where X_transformed = $\mathbf{X} \tilde{\mathbf{V}}_k$ is the matrix of scores (new features).
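For centered data the two routes coincide, which is easy to verify numerically. A sketch with made-up data (note I use ddof=1 here to match the $n-1$ in $(*)$, whereas np.var defaults to the $1/n$ convention):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, k = 50, 4, 2
X = rng.normal(size=(n, p))
Xc = X - X.mean(axis=0)                    # centered data matrix

U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

# PCA's route: eigenvalues of C via (*), lambda_j = s_j^2 / (n - 1)
ev_from_svd = s[:k] ** 2 / (n - 1)

# TruncatedSVD's route: variance of the score columns X_c V_k
scores = Xc @ Vt[:k].T
ev_from_scores = scores.var(axis=0, ddof=1)
```

The score columns have exactly zero mean for centered input, so their (ddof=1) variance is $s_j^2/(n-1)$ and the two vectors agree.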
12,113

Explanation of what Nate Silver said about loess
The problem with lowess or loess is that it uses polynomial interpolation. It is well known in forecasting that polynomials have erratic behavior in the tails. When interpolating, piecewise 3rd-degree polynomials provide excellent and flexible modeling of trends, whereas when extrapolating beyond the range of observed data they explode. Had you observed later data in the time series, you definitely would have needed to include another breakpoint in the splines to obtain a good fit.
Forecasting models, though, are well explored elsewhere in the literature. Filtering processes like the Kalman filter and the particle filter provide excellent forecasts. Basically, a good forecast model will be anything based on Markov chains, where time is not treated as a parameter in the model, but previous model state(s) are used to inform forecasts.
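To see the "explodes in the tails" point concretely, here is a toy illustration (made-up data, not Silver's): fit a cubic to one period of a sine, then evaluate it inside and far outside the observed range.

```python
import numpy as np

t = np.linspace(0.0, 2 * np.pi, 50)
y = np.sin(t)

coeffs = np.polyfit(t, y, deg=3)          # global cubic fit

inside = np.polyval(coeffs, np.pi)        # within the data: small error
outside = np.polyval(coeffs, 4 * np.pi)   # extrapolation: the cubic explodes
```

Within the observed range the cubic tracks the sine reasonably well, but at $4\pi$ the $t^3$ term dominates and the prediction leaves the $[-1, 1]$ range by an order of magnitude or more; spline fits behave the same way once you step outside the last knot.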
12,114

Next steps after "Bayesian Reasoning and Machine Learning"
I'd not heard of the Barber book before, but having had a quick look through it, it does look very very good.
Unless you've got a particular field you want to look into I'd suggest the following (some/many of which you've probably already heard of):
Information Theory, Inference and Learning Algorithms, by D.J.C. MacKay. A classic, and the author makes a .pdf of it available for free online, so you've no excuse.
Pattern Recognition and Machine Learning, by C.M. Bishop. Frequently cited, though there looks to be a lot of crossover between this and the Barber book.
Probability Theory: The Logic of Science, by E.T. Jaynes. In some areas perhaps a bit more basic. However the explanations are excellent. I found it cleared up a couple of misunderstandings I didn't even know I had.
Elements of Information Theory, by T.M. Cover and J.A. Thomas. Attacks probability from the perspective of, yes, you guessed it, information theory. Some very neat stuff on channel capacity and max ent. A bit different from the more Bayesian stuff (I can only remember seeing one prior in the whole book).
Statistical Learning Theory, by V. Vapnik. Thoroughly un-Bayesian, which may not appeal to you. Focuses on probabilistic upper bounds on structural risk. Explains where support vector machines come from.
Sir Karl Popper produced a series of works on the philosophy of scientific discovery, which feature quite a lot of stats (collections of them can be bought, but I don't have any titles to hand - apologies). Again, not Bayesian in the slightest, but his discussion of falsifiability and its relationship to Occam's razor is (in my opinion) fascinating, and should be read by anyone involved in doing science.
12,115

Next steps after "Bayesian Reasoning and Machine Learning"
I recently found a book with a more computational perspective on Bayesian reasoning and statistics: "Probabilistic Programming and Bayesian Methods for Hackers". This is probably equally as good an introduction to Bayesian methods as Barber.
12,116

What is empirical entropy?
If the data is $x^n = x_1 \ldots x_n$, that is, an $n$-sequence from a sample space $\mathcal{X}$, the empirical point probabilities are
$$\hat{p}(x) = \frac{1}{n}|\{ i \mid x_i = x\}| = \frac{1}{n} \sum_{i=1}^n \delta_x(x_i)$$
for $x \in \mathcal{X}$. Here $\delta_x(x_i)$ is one if $x_i = x$ and zero otherwise. That is, $\hat{p}(x)$ is the relative frequency of $x$ in the observed sequence. The entropy of the probability distribution given by the empirical point probabilities is
$$H(\hat{p}) = - \sum_{x \in \mathcal{X}} \hat{p}(x) \log \hat{p}(x) = - \sum_{x \in \mathcal{X}} \frac{1}{n} \sum_{i=1}^n \delta_x(x_i) \log \hat{p}(x) = -\frac{1}{n} \sum_{i=1}^n \log\hat{p}(x_i).$$
The latter identity follows by interchanging the two sums and noting that $$\sum_{x \in \mathcal{X}} \delta_x(x_i) \log\hat{p}(x) = \log\hat{p}(x_i).$$
From this we see that
$$H(\hat{p}) = - \frac{1}{n} \log \hat{p}(x^n)$$
with $\hat{p}(x^n) = \prod_{i=1}^n \hat{p}(x_i)$ and using the terminology from the question this is the empirical entropy of the empirical probability distribution. As pointed out by @cardinal in a comment, $- \frac{1}{n} \log p(x^n)$ is the empirical entropy of a given probability distribution with point probabilities $p$.
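The identity $H(\hat{p}) = -\frac{1}{n} \log \hat{p}(x^n)$ is easy to check numerically (a small sketch with a made-up sequence):

```python
import math
from collections import Counter

x = list("aabbbcc")                         # an example sequence x^n
n = len(x)
p_hat = {v: c / n for v, c in Counter(x).items()}  # empirical point probabilities

# Entropy of the empirical distribution ...
H_dist = -sum(p * math.log(p) for p in p_hat.values())

# ... equals the average negative log empirical probability of the sequence
H_seq = -sum(math.log(p_hat[xi]) for xi in x) / n
```

Both expressions compute the same number, as the interchange-of-sums argument above shows.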
12,117

What is empirical entropy?
Entropy is defined for probability distributions. When you do not have one, but only data, and plug in a naive estimator of the probability distribution, you get empirical entropy. This is easiest for discrete (multinomial) distributions, as shown in another answer, but can also be done for other distributions by binning, etc.
A problem with empirical entropy is that it is biased for small samples. The naive estimate of the probability distribution shows extra variation due to sampling noise. Of course one can use a better estimator, e.g., a suitable prior for the multinomial parameters, but getting it really unbiased is not easy.
The above applies to conditional distributions as well. In addition, everything is relative to binning (or kernelization), so you actually have a kind of differential entropy.
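A quick simulation of that small-sample bias (a sketch; the uniform four-symbol distribution and seed are arbitrary choices of mine): the plug-in estimate systematically underestimates the true entropy, here by roughly $(K-1)/(2n)$ nats.

```python
import numpy as np

rng = np.random.default_rng(0)
p_true = np.full(4, 0.25)
H_true = -np.sum(p_true * np.log(p_true))    # = log 4

n, reps = 10, 2000                           # small samples, many replicates
H_hat = np.empty(reps)
for r in range(reps):
    sample = rng.choice(4, size=n, p=p_true)
    freq = np.bincount(sample, minlength=4) / n
    nz = freq > 0                            # 0 * log 0 is taken as 0
    H_hat[r] = -np.sum(freq[nz] * np.log(freq[nz]))

bias = H_hat.mean() - H_true                 # negative: underestimation
```

The bias comes exactly from the sampling noise in the naive frequency estimates mentioned above.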
12,118

On the "strength" of weak learners
This may be more in bagging spirit, but nevertheless:
If you really have a strong learner, there is no need to improve it by any ensemble stuff.
I would say... irrelevant. In blending and bagging trivially; in boosting, making a too-strong classifier may lead to some breaches in convergence (i.e. a lucky prediction may make the next iteration predict pure noise and thus decrease performance), but this is usually repaired in subsequent iterations.
Again, this is not the real problem. The very core of those methods is to
1. force the partial classifiers to look deeper into the problem.
2. join their predictions to attenuate the noise and amplify the signal.
1) needs some attention in boosting (i.e. a good boosting scheme, a well-behaving partial learner -- but this is mostly to be judged by experiments on the whole boost), 2) in bagging and blending (mostly how to ensure lack of correlation between learners and not over-noise the ensemble). As long as this is OK, the accuracy of the partial classifier is a third-order problem.
12,119

On the "strength" of weak learners
First, the notions of "weak" and "strong" are only weakly defined. From my point of view they must be defined relative to the optimal Bayes classifier, which is the target of any training algorithm. With this in mind, my replies to three of the points are as follows.
Computational as I see it. Most weak learners I know of are computationally fast (and otherwise not worth consideration). A major point in ensemble learning is precisely that we can combine simple and fast, but not so good, learners and improve on the error rate. If we use stronger (and computationally more demanding) learners the room for improvement becomes smaller while the computational cost becomes larger, which makes the use of ensemble methods less interesting. Moreover, a single strong learner may be easier to interpret. However, what is weak and what is strong depends on the problem and the optimal Bayes rate that we attempt to achieve. Hence, if a learner that is often considered strong still leaves room for improvements when boosting it and boosting is computationally feasible, then do boost ...
This will depend on the criteria you use to measure "optimal". In terms of error rate I would say no (I welcome any corrections if others have a different experience). In terms of speed, maybe, but I would imagine that this is highly problem dependent. I don't know any literature addressing this, sorry.
?
Cross validation, cross validation, cross validation. Like any other comparison of methods for training with the goal of making predictions we need unbiased estimates of the generalization error for the comparison, which can be achieved by setting aside a test data set or approximating this by cross validation.
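For concreteness, the k-fold splitting that such a comparison relies on can be sketched in a few lines (a generic sketch of mine, not tied to any particular library):

```python
import numpy as np

def kfold_indices(n, k, seed=0):
    """Yield (train_idx, test_idx) pairs for k-fold cross validation."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(n)                # shuffle once, then slice
    folds = np.array_split(perm, k)
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, test

splits = list(kfold_indices(n=10, k=3))
```

Each observation lands in exactly one test fold, so averaging the test errors over folds gives the (approximately) unbiased estimate of generalization error needed for the comparison.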
12,120

What is the NULL hypothesis for interaction in a two-way ANOVA?
I think it's important to clearly separate the hypothesis and its corresponding test. For the following, I assume a balanced, between-subjects CRF-$pq$ design (equal cell sizes, Kirk's notation: Completely Randomized Factorial design).
$Y_{ijk}$ is observation $i$ in treatment $j$ of factor $A$ and treatment $k$ of factor $B$ with $1 \leq i \leq n$, $1 \leq j \leq p$ and $1 \leq k \leq q$. The model is $Y_{ijk} = \mu_{jk} + \epsilon_{i(jk)}, \quad \epsilon_{i(jk)} \sim N(0, \sigma_{\epsilon}^2)$
Design:
$\begin{array}{r|ccccc|l}
~ & B 1 & \ldots & B k & \ldots & B q & ~\\\hline
A 1 & \mu_{11} & \ldots & \mu_{1k} & \ldots & \mu_{1q} & \mu_{1.}\\
\ldots & \ldots & \ldots & \ldots & \ldots & \ldots & \ldots\\
A j & \mu_{j1} & \ldots & \mu_{jk} & \ldots & \mu_{jq} & \mu_{j.}\\
\ldots & \ldots & \ldots & \ldots & \ldots & \ldots & \ldots\\
What is the NULL hypothesis for interaction in a two-way ANOVA?
I think it's important to clearly separate the hypothesis and its corresponding test. For the following, I assume a balanced, between-subjects CRF-$pq$ design (equal cell sizes, Kirk's notation: Completely Randomized Factorial design).
$Y_{ijk}$ is observation $i$ in treatment $j$ of factor $A$ and treatment $k$ of factor $B$ with $1 \leq i \leq n$, $1 \leq j \leq p$ and $1 \leq k \leq q$. The model is $Y_{ijk} = \mu_{jk} + \epsilon_{i(jk)}, \quad \epsilon_{i(jk)} \sim N(0, \sigma_{\epsilon}^2)$
Design:
$\begin{array}{r|ccccc|l}
~ & B 1 & \ldots & B k & \ldots & B q & ~\\\hline
A 1 & \mu_{11} & \ldots & \mu_{1k} & \ldots & \mu_{1q} & \mu_{1.}\\
\ldots & \ldots & \ldots & \ldots & \ldots & \ldots & \ldots\\
A j & \mu_{j1} & \ldots & \mu_{jk} & \ldots & \mu_{jq} & \mu_{j.}\\
\ldots & \ldots & \ldots & \ldots & \ldots & \ldots & \ldots\\
A p & \mu_{p1} & \ldots & \mu_{pk} & \ldots & \mu_{pq} & \mu_{p.}\\\hline
~ & \mu_{.1} & \ldots & \mu_{.k} & \ldots & \mu_{.q} & \mu
\end{array}$
$\mu_{jk}$ is the expected value in cell $jk$, $\epsilon_{i(jk)}$ is the error associated with the measurement of person $i$ in that cell. The $()$ notation indicates that the indices $jk$ are fixed for any given person $i$ because that person is observed in only one condition. A few definitions for the effects:
$\mu_{j.} = \frac{1}{q} \sum_{k=1}^{q} \mu_{jk}$ (average expected value for treatment $j$ of factor $A$)
$\mu_{.k} = \frac{1}{p} \sum_{j=1}^{p} \mu_{jk}$ (average expected value for treatment $k$ of factor $B$)
$\alpha_{j} = \mu_{j.} - \mu$ (effect of treatment $j$ of factor $A$, $\sum_{j=1}^{p} \alpha_{j} = 0$)
$\beta_{k} = \mu_{.k} - \mu$ (effect of treatment $k$ of factor $B$, $\sum_{k=1}^{q} \beta_{k} = 0$)
$(\alpha \beta)_{jk} = \mu_{jk} - (\mu + \alpha_{j} + \beta_{k}) = \mu_{jk} - \mu_{j.} - \mu_{.k} + \mu$
(interaction effect for the combination of treatment $j$ of factor $A$ with treatment $k$ of factor $B$, $\sum_{j=1}^{p} (\alpha \beta)_{jk} = 0 \, \wedge \, \sum_{k=1}^{q} (\alpha \beta)_{jk} = 0)$
$\alpha_{j}^{(k)} = \mu_{jk} - \mu_{.k}$
(conditional main effect for treatment $j$ of factor $A$ within fixed treatment $k$ of factor $B$, $\sum_{j=1}^{p} \alpha_{j}^{(k)} = 0 \, \wedge \, \frac{1}{q} \sum_{k=1}^{q} \alpha_{j}^{(k)} = \alpha_{j} \quad \forall \, j, k)$
$\beta_{k}^{(j)} = \mu_{jk} - \mu_{j.}$
(conditional main effect for treatment $k$ of factor $B$ within fixed treatment $j$ of factor $A$, $\sum_{k=1}^{q} \beta_{k}^{(j)} = 0 \, \wedge \, \frac{1}{p} \sum_{j=1}^{p} \beta_{k}^{(j)} = \beta_{k} \quad \forall \, j, k)$
With these definitions, the model can also be written as:
$Y_{ijk} = \mu + \alpha_{j} + \beta_{k} + (\alpha \beta)_{jk} + \epsilon_{i(jk)}$
This allows us to express the null hypothesis of no interaction in several equivalent ways:
$H_{0_{I}}: \sum_{j}\sum_{k} (\alpha \beta)^{2}_{jk} = 0$
(all individual interaction terms are $0$, such that $\mu_{jk} = \mu + \alpha_{j} + \beta_{k} \, \forall j, k$. This means that treatment effects of both factors - as defined above - are additive everywhere.)
$H_{0_{I}}: \alpha_{j}^{(k)} - \alpha_{j}^{(k')} = 0 \quad \forall \, j \, \wedge \, \forall \, k, k' \quad (k \neq k')$
(all conditional main effects for any treatment $j$ of factor $A$ are the same, and therefore equal $\alpha_{j}$. This is essentially Dason's answer.)
$H_{0_{I}}: \beta_{k}^{(j)} - \beta_{k}^{(j')} = 0 \quad \forall \, j, j' \, \wedge \, \forall \, k \quad (j \neq j')$
(all conditional main effects for any treatment $k$ of factor $B$ are the same, and therefore equal $\beta_{k}$.)
$H_{0_{I}}$: In a diagram which shows the expected values $\mu_{jk}$ with the levels of factor $A$ on the $x$-axis and the levels of factor $B$ drawn as separate lines, the $q$ different lines are parallel.
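As a numerical sanity check of these definitions (a Python sketch; the 3×2 table of cell means and all numbers in it are made up), one can verify that an additive table has all $(\alpha \beta)_{jk} = 0$, and that the interaction terms always satisfy the zero-sum constraints over rows and columns:

```python
import numpy as np

# Hypothetical 3x2 table of cell means mu_jk (p = 3 levels of A, q = 2 of B),
# built additively: mu_jk = mu + alpha_j + beta_k, so H0_I holds by construction.
mu = 10.0
alpha = np.array([-1.0, 0.0, 1.0])       # sums to 0
beta = np.array([-2.0, 2.0])             # sums to 0
cell_means = mu + alpha[:, None] + beta[None, :]

def interaction_effects(m):
    """(alpha beta)_jk = mu_jk - mu_j. - mu_.k + mu (unweighted means)."""
    return m - m.mean(axis=1, keepdims=True) - m.mean(axis=0, keepdims=True) + m.mean()

ab = interaction_effects(cell_means)
print(np.allclose(ab, 0.0))              # True: additive table, all (ab)_jk = 0

# A non-additive table: the interaction terms become nonzero, but they still
# satisfy the zero-sum constraints over every row and every column.
ab2 = interaction_effects(cell_means + np.array([[1.0, -1.0], [0.0, 0.0], [-1.0, 1.0]]))
print(np.allclose(ab2.sum(axis=0), 0.0), np.allclose(ab2.sum(axis=1), 0.0))  # True True
```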
|
12,121
|
What is the NULL hypothesis for interaction in a two-way ANOVA?
|
An interaction tells us that the levels of factor A have different effects based on what level of factor B you're applying. So we can test this through a linear contrast. Let C = (A1B1 - A1B2) - (A2B1 - A2B2) where A1B1 stands for the mean of the group that received A1 and B1 and so on. So here we're looking at A1B1 - A1B2 which is the effect that factor B is having when we're applying A1. If there is no interaction this should be the same as the effect B is having when we apply A2: A2B1 - A2B2. If those are the same then their difference should be 0 so we could use the tests:
$H_0: C = 0\quad\text{vs.}\quad H_A: C \neq 0.$
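A quick numerical illustration of the contrast (a Python sketch; the cell means are hypothetical, and this computes $C$ from population means rather than the full test statistic): under additive means $C$ is exactly $0$, and perturbing a single cell makes it nonzero.

```python
# Hypothetical 2x2 cell means; A1B1 etc. are the group means from the answer.
mu, a, b = 20.0, 1.5, -0.5
means = {"A1B1": mu + a + b, "A1B2": mu + a - b,
         "A2B1": mu - a + b, "A2B2": mu - a - b}   # additive: no interaction

def contrast(m):
    return (m["A1B1"] - m["A1B2"]) - (m["A2B1"] - m["A2B2"])

print(contrast(means))        # 0.0: the effect of B is the same at A1 and A2

means["A2B2"] += 3.0          # break additivity in a single cell
print(contrast(means))        # 3.0: the contrast now detects the interaction
```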
|
12,122
|
Why is Gaussian distribution on high dimensional space like a soap bubble
|
I can't answer about what the OP's famous post claims, but let us consider the simpler case of uniform distributions on the unit disc: $(X,Y)$ is uniformly distributed on the unit disc (that is, $f_{X,Y}(x,y) = \frac 1\pi$ for $x^2+y^2 < 1$). What is the probability that $(X,Y)$ is closer to the unit circle, that is, closer to the boundary of the unit disc than it is to the origin (center of the circle)? Well, only those points that lie inside the circle of radius $\frac 12$ are at distance $< \frac 12$ from the origin, and so all points outside this smaller circle are at distance $> \frac 12$ from the origin. It is an easy computation to arrive at
$$P\left(\frac 12 < \sqrt{X^2+Y^2} < 1\right) = 1- P\left(0\leq \sqrt{X^2+Y^2} < \frac 12\right) = 1 - \frac 1\pi \cdot \pi\left(\frac 12\right)^2 = \frac 34.$$
A similar calculation for a uniform distribution on the interior of a unit sphere in 3 dimensions (the pdf has value $\frac{3}{4\pi}$ on the interior) gives
\begin{align}
P\left(\frac 12 < \sqrt{X^2+Y^2+Z^2} < 1\right) &= 1- P\left(0\leq \sqrt{X^2+Y^2+Z^2} < \frac 12\right)\\
&= 1 - \frac{3}{4\pi} \cdot \frac{4\pi}{3}\left(\frac 12\right)^3\\
&= \frac 78.
\end{align}
Generalizing to $n > 3$ dimensions and remembering that the volume of an $n$-dimensional hypersphere of radius $r$ is proportional to $r^n$, we get by very similar calculations that
$$P\left(\frac 12 < \sqrt{\sum_{i=1}^n X_i^2} < 1\right) = \frac{2^n-1}{2^n},$$
that is, most of the probability mass lies closer to the surface of the sphere than to the origin. As a final comment, note that the $X_i$ are NIBNID random variables, an acronym that stands for Not Independent But Nonetheless Identically Distributed.
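A quick Monte Carlo check of this formula (a Python sketch; drawing a Gaussian direction and a $U^{1/n}$ radius is a standard way to sample uniformly from the unit $n$-ball):

```python
import numpy as np

rng = np.random.default_rng(0)

def shell_prob(n, samples=200_000):
    """Estimate P(radius > 1/2) for a point uniform in the unit n-ball."""
    g = rng.standard_normal((samples, n))
    directions = g / np.linalg.norm(g, axis=1, keepdims=True)
    radii = rng.random(samples) ** (1.0 / n)     # P(R <= r) = r^n
    pts = directions * radii[:, None]
    return np.mean(np.linalg.norm(pts, axis=1) > 0.5)

# Compare the estimate with the exact value (2^n - 1) / 2^n
for n in (2, 3, 10):
    print(n, shell_prob(n), 1 - 0.5 ** n)
```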
Turning to IID standard Gaussian random variables, the joint density is not uniformly distributed but has a very pronounced peak at the origin. But, there is so little volume near the center of a hypersphere as compared to closer to the surface that when we integrate the density over the volume of a hypersphere of small radius $r$ to find $P\left(\sqrt{\sum_{i=1}^n X_i^2} < r\right)$, most of this probability mass is obtained from the small contributions from the periphery (there are so many of them) and very little from the few but larger contributions from the core; that is, most of the probability mass lies closer to the skin of the orange than to the center. But things change as $r$ increases. Since $\sum_{i=1}^n X_i^2$ is a $\chi^2$ random variable with $n$ degrees of freedom (with mean $n$ and variance $2n$), which for large $n$ can be approximated as a Gaussian random variable with the same mean and variance, most of its probability mass lies in the range $\left[n-\sqrt{18n},n+\sqrt{18n}\right] = [\mu-3\sigma,\mu+3\sigma]$. Put another way,
the quantity $P\left({\sum_{i=1}^n X_i^2} < r^2\right)$ is close to $0$ for small $r$ (the nearly empty space inside the soap bubble), and then (regarded as a function of $r$) increases very rapidly with $r$ in the close vicinity of $r=\sqrt n$ (this is the thin skin of the bubble where most of the mass is) to almost $1$, and then very slowly to its asymptotic value of $1$ (the nearly empty space outside the bubble). In short, the soap bubble analogy is very apt for Gaussian distributions; almost all the probability mass of the joint pdf of $n$ standard Gaussian random variables does indeed lie in a very thin shell of radius $\approx \sqrt n$ and there is very little probability mass that is not in the shell -- both the interior and the exterior of the shell is mostly empty as is the case with soap bubbles.
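This shell picture can be checked numerically (a Python sketch using `scipy`; the $\pm 3\sigma$ window is just for illustration): the $\chi^2_n$ mass inside $[n - 3\sqrt{2n},\, n + 3\sqrt{2n}]$ is essentially all of it, while the corresponding band of radii, relative to $\sqrt n$, becomes thin as $n$ grows.

```python
import numpy as np
from scipy import stats

# Mass of ||X||^2 ~ chi2(n) inside the "skin" [n - 3*sqrt(2n), n + 3*sqrt(2n)],
# and the relative thickness of the corresponding shell of radii.
for n in (10, 100, 10_000):
    lo = max(n - 3 * np.sqrt(2 * n), 0.0)
    hi = n + 3 * np.sqrt(2 * n)
    mass = stats.chi2.cdf(hi, df=n) - stats.chi2.cdf(lo, df=n)
    width = (np.sqrt(hi) - np.sqrt(lo)) / np.sqrt(n)   # shell thickness / sqrt(n)
    print(n, round(mass, 4), round(width, 4))
```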
|
12,123
|
Why is Gaussian distribution on high dimensional space like a soap bubble
|
The post you link to concerns the use of the normal distribution in high-dimensional problems. So, suppose you are working in a space $\mathbb{R}^m$ where the dimension $m$ is large. Let $\boldsymbol{I}$ be the $m$-dimensional identity matrix and consider a normal random vector:
$$\mathbf{X} \equiv (X_1,...,X_m) \sim \text{N}(\mathbf{0}, \sigma^2 \boldsymbol{I}).$$
A well-known property of this distribution is that a centered and normed normal random vector is uniformly distributed on the unit sphere. That is, if we let $\mathcal{S}_r^m \equiv \{ \mathbf{x} \in \mathbb{R}^m | \sum x_i^2 = r^2 \}$ denote the $m$-dimensional sphere with radius $r$, then we have:
$$\frac{\mathbf{X}}{||\mathbf{X}||} \sim \text{U}(\mathcal{S}_1^m).$$
It is also well-known that the distribution of the scaled-norm of the random vector is:
$$\frac{||\mathbf{X}||}{\sigma \sqrt{m}} \sim \frac{\chi_m}{\sqrt{m}}.$$
Taking $m \rightarrow \infty$, the right-hand-side converges in probability to one. Thus, for large $m$ we have:
$$\mathbf{X} \overset{\text{Approx}}{\sim} \text{U}(\mathcal{S}_{\sigma \sqrt{m}}^m)$$
This shows that when $m$ becomes large, the points from this normal random vector are approximately distributed on the surface of the sphere of radius $\sigma \sqrt{m}$. This is what the linked post is referring to when it notes that "...in high dimensions, Gaussian distributions are practically indistinguishable from uniform distributions on the unit sphere".
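A small numerical check of this concentration (a Python sketch; the chi mean is computed via log-gamma to avoid overflow at large $m$, and $\sigma = 1$ is assumed): the mean of $\|\mathbf X\|/\sqrt m$ tends to one while its standard deviation shrinks like $1/\sqrt{2m}$.

```python
import numpy as np
from scipy.special import gammaln

def chi_mean(df):
    # E[chi_df] = sqrt(2) * Gamma((df + 1) / 2) / Gamma(df / 2), via log-gamma
    return np.sqrt(2.0) * np.exp(gammaln((df + 1) / 2) - gammaln(df / 2))

for m in (2, 100, 10_000):
    mean_scaled = chi_mean(m) / np.sqrt(m)                   # E[||X||] / sqrt(m)
    sd_scaled = np.sqrt(m - chi_mean(m) ** 2) / np.sqrt(m)   # SD[||X||] / sqrt(m)
    print(m, round(mean_scaled, 5), round(sd_scaled, 5))
```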
|
12,124
|
Why is Gaussian distribution on high dimensional space like a soap bubble
|
I really think that the vision of an empty bubble is misleading.
(tl;dr: instead of an empty bubble, I think it is better to say that it resembles a star with $n$ vertices where $n\rightarrow\infty$, or some kind of non-empty fractal structure whose border length goes to $\infty$. But still more dense in the center.)
The higher the dimension, the more points there are close to the border of an $n$-sphere relative to its center (easy to see moving from 2D to 3D), but for a multivariate normal distribution: $$P\left( \left\{ \text{n-sphere center in} \left(0,...,0\right) \text{ and } r=\delta \right\}\right ) > P\left( \left\{ \text{n-sphere center in} \left(\delta,...,0\right) \text{ and } r=\delta \right\}\right )$$
Always, for any dimension and any $\delta$, the probability is higher the closer you are to the center (comparing regions of equal size, of course). So the distribution is still more dense in the center, but there is proportionally less "center" (the center never vanishes, because $dimension \rightarrow \infty$ and not $dimension = \infty$).
Yes, if you pick a point at random it is more likely to be near the border than near the center, but not because the probability near the center is "empty as a bubble"; simply because there are more points close to the border.
This already happens in 2D: in a disc of radius 1 "there are" $\pi/4$ points within distance ½ of the center, while there are $3\pi/4$ points within distance ½ of the border. And indeed, with a bivariate normal distribution the probability of picking a point next to the border is higher than that of picking one near the center
(see code below).
Another way to visualize it is comparing a disc with a 2D star: since the star has "more border", the probability of picking a point next to the border of a star is higher, but the star is not "empty".
Instead of an empty bubble, it would be better to say: it resembles a star with $n$ vertices where $n\rightarrow\infty$, or some kind of non-empty fractal structure whose border length goes to $\infty$. But still more dense in the center.
PS:
Using R:
library(shotGroups)
# using a 2D multivariate normal distribution
# probability of choosing a point inside the circle of radius delta = 0.5
inner_cercle_prob <- pmvnEll(r=0.5, sigma=diag(2), mu=c(0,0), e=diag(2), x0=c(0,0))
# probability of choosing a point inside the circle of radius 1
full_cercle_prob <- pmvnEll(r=1, sigma=diag(2), mu=c(0,0), e=diag(2), x0=c(0,0))
# probability of choosing a point inside the circle but closer to the border
corona_prob <- full_cercle_prob - inner_cercle_prob
# probability of choosing a point outside the circle
outside_cercle_prob <- 1 - full_cercle_prob
outside_cercle_prob
[1] 0.6065307
corona_prob
[1] 0.2759662
inner_cercle_prob
[1] 0.1175031
# but for circles with the same radius, the one closer to the center has higher prob.
pmvnEll(r=0.5, sigma=diag(2), mu=c(0,0), e=diag(2), x0=c(0,0))
[1] 0.1175031
pmvnEll(r=0.5, sigma=diag(2), mu=c(0,0), e=diag(2), x0=c(0.5,0))
[1] 0.1044914
|
12,125
|
Why is Gaussian distribution on high dimensional space like a soap bubble
|
This is an old post with some great responses but I'd like to give a different perspective.
Assume we take a sample $x$ from $\mathcal{N}(\vec0, \mathcal{I})$ in $D$ dimensions. If the high-dimensional Gaussian is hollow, then that would mean that, with high probability, at least one coordinate of our sample $x$ deviates from the mean. By the CDF of the normal distribution, the chance of $x$'s first coordinate being within one standard deviation is about $68.2\%$. Now what is the chance that both the first and second coordinates are within one standard deviation? They are independent, so it's $0.682^2$. By extension, the probability that a sample in $D$-dimensional space is within one standard deviation along every axis is $0.682^D$. Naturally, this goes to 0 very quickly.
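In code (a Python sketch; `scipy` is used only for the normal CDF):

```python
from scipy.stats import norm

# Probability that one standard-normal coordinate lies within one sd of the mean
p1 = norm.cdf(1) - norm.cdf(-1)
print(round(p1, 4))            # 0.6827
for D in (1, 10, 100, 1000):
    print(D, p1 ** D)          # decays geometrically to 0
```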
|
12,126
|
Why is Gaussian distribution on high dimensional space like a soap bubble
|
I don't think it is true that "a Gaussian distribution in higher dimensions looks like a soap bubble". But first let's see why, in accordance with some of the very detailed responses above, one might be led to think so.
In Cartesian coordinates in $D$ dimensions, after standardization, the probability density looks like
$$ p(\vec{X}) ~d\vec{X} \sim V^{-D/2} \exp\left(-\frac{\|\vec{X}\|^2}{2V}\right) ~d\vec{X}$$
where $V$ is the 1D variance of each variable.
We can rewrite this in spherical coordinates, and use spherical symmetry to integrate over the D-1 dimensional spheres. I'll leave out the factor corresponding to the volume of the D-1 sphere (not the ball which is the "interior" of the sphere). The radial distribution is
$$ p(r)dr \sim V^{-\frac{D}2} r^{D-1} \exp(-\frac{r^2}{2V})~dr = V^{-\frac{D}2} r^{D-2} \exp(-\frac{r^2}{2V}) ~ \frac12 dr^2, ~r > 0 $$
Introducing $z = \frac{r^2}2$, the distribution is:
$$p(z)dz \sim V^{-\frac{D}2} z^{\frac{D}2-1} \exp(-\frac{z}{V})~dz$$
which is just a Gamma Distribution (in $z$).
Now you can look up https://en.wikipedia.org/wiki/Gamma_distribution or calculate the moments directly (modulo some irrelevant constant factors): the mean radius is
$$<r> \sim \sqrt{DV}$$
and the variance of the radius $<(r-<r>)^2>$:
$$var(r) \sim V$$
which means the standard deviation of $r$
$$SD(r) \sim \sqrt{V}.$$
The relative SD:
$$ \frac{SD(r)}{<r>} \sim \frac1{\sqrt{D}}$$
which, as the number of dimensions $D$ increases, tends to 0.
So we think that looks like a bubble. But, the question is, as a distribution, does the radial distribution $\rightarrow \delta(r-r_0)$ in the limit $D\rightarrow\infty$?
Now let's go back to that D-dimensional distribution in Cartesian coordinates. It is peaked at the origin and falls off as your distance from the origin increases. It looks nothing like a bubble. If you had a mass density distributed like this you would not encounter any bubble that you have to pierce nor any "thickening" at $r_0$; in fact, the density would continue to increase as you travelled towards the center. It is only when we integrate over the shells of fixed radius and collapse them to a single radial point that we get the relative standard deviation tending to 0 with increased dimensionality.
So, no, it is not true that "a Gaussian distribution in higher dimensions looks like a soap bubble".
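For what it's worth, the radial moments can be checked numerically (a Python sketch; the chi mean is computed via log-gamma for stability, $V = 1$ assumed): the mean radius grows like $\sqrt{DV}$, while the relative SD decays like $1/\sqrt{D}$ (with constant $1/\sqrt 2$), even though the density itself is maximal at the origin.

```python
import numpy as np
from scipy.special import gammaln

def radial_mean(D, V=1.0):
    # <r> for r^2/V ~ chi2_D, i.e. sqrt(V) * E[chi_D], via log-gamma for stability
    return np.sqrt(2.0 * V) * np.exp(gammaln((D + 1) / 2) - gammaln(D / 2))

def radial_rel_sd(D, V=1.0):
    m = radial_mean(D, V)
    var = D * V - m ** 2          # var(r) = <r^2> - <r>^2 = DV - <r>^2
    return np.sqrt(var) / m

for D in (10, 100, 10_000):
    print(D, round(radial_mean(D), 4), round(radial_rel_sd(D), 5))
```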
|
12,127
|
When to "add" layers and when to "concatenate" in neural networks?
|
Adding is nice if you want to interpret one of the inputs as a residual "correction" or "delta" to the other input. For example, the residual connections in ResNet are often interpreted as successively refining the feature maps. Concatenating may be more natural if the two inputs aren't very closely related. However, the difference is smaller than you may think.
Note that $W[x,y] = W_1x + W_2y$ where $[\ ]$ denotes concat and $W$ is split horizontally into $W_1$ and $W_2$. Compare this to $W(x+y) = Wx + Wy$. So you can interpret adding as a form of concatenation where the two halves of the weight matrix are constrained to $W_1 = W_2$.
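The identity above is easy to verify numerically (a NumPy sketch; the shapes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=4)           # first input
y = rng.normal(size=4)           # second input
W1 = rng.normal(size=(3, 4))     # weights applied to x
W2 = rng.normal(size=(3, 4))     # weights applied to y

# Concatenation: W [x, y] with W = [W1 | W2] split column-wise
W = np.hstack([W1, W2])
concat_out = W @ np.concatenate([x, y])
assert np.allclose(concat_out, W1 @ x + W2 @ y)

# Addition is the constrained special case W1 == W2:
add_out = W1 @ (x + y)
assert np.allclose(add_out, W1 @ x + W1 @ y)
```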
|
When to "add" layers and when to "concatenate" in neural networks?
|
Adding is nice if you want to interpret one of the inputs as a residual "correction" or "delta" to the other input. For example, the residual connections in ResNet are often interpreted as successivel
|
When to "add" layers and when to "concatenate" in neural networks?
Adding is nice if you want to interpret one of the inputs as a residual "correction" or "delta" to the other input. For example, the residual connections in ResNet are often interpreted as successively refining the feature maps. Concatenating may be more natural if the two inputs aren't very closely related. However, the difference is smaller than you may think.
Note that $W[x,y] = W_1x + W_2y$ where $[\ ]$ denotes concat and $W$ is split horizontally into $W_1$ and $W_2$. Compare this to $W(x+y) = Wx + Wy$. So you can interpret adding as a form of concatenation where the two halves of the weight matrix are constrained to $W_1 = W_2$.
|
When to "add" layers and when to "concatenate" in neural networks?
Adding is nice if you want to interpret one of the inputs as a residual "correction" or "delta" to the other input. For example, the residual connections in ResNet are often interpreted as successivel
|
12,128
|
When to "add" layers and when to "concatenate" in neural networks?
|
I am not an expert, but based on my light reading, 'addition' is used for 'identity links' in constructs such as Residual Blocks to preserve information prior to convolution, which, as the pros said, is useful as the network goes deeper.
Concatenation is quite confusing when it comes to "how does it help?". As you said, it adds information in a literal sense: it takes a wider shot by simply stacking filters arriving from different operations (after splitting the feature maps) together into a block. It seems to be used widely for 'pre-stemming'.
The two sound similar at first, but they serve different functions and shouldn't really be compared directly.
|
When to "add" layers and when to "concatenate" in neural networks?
|
I am not an expert, but based on my light reading, 'addition' is used for 'identity links' in constructs such as Residue Blocks to preserve information prior to convolution, which as the pros said is
|
When to "add" layers and when to "concatenate" in neural networks?
I am not an expert, but based on my light reading, 'addition' is used for 'identity links' in constructs such as Residual Blocks to preserve information prior to convolution, which, as the pros said, is useful as the network goes deeper.
Concatenation is quite confusing when it comes to "how does it help?". As you said, it adds information in a literal sense: it takes a wider shot by simply stacking filters arriving from different operations (after splitting the feature maps) together into a block. It seems to be used widely for 'pre-stemming'.
The two sound similar at first, but they serve different functions and shouldn't really be compared directly.
|
When to "add" layers and when to "concatenate" in neural networks?
I am not an expert, but based on my light reading, 'addition' is used for 'identity links' in constructs such as Residue Blocks to preserve information prior to convolution, which as the pros said is
|
12,129
|
The definition of natural cubic splines for regression
|
Let's start by considering ordinary cubic splines. They're cubic between every pair of knots and cubic outside the boundary knots. We start with 4df for the first cubic (left of the first boundary knot), and each knot adds one new parameter (because the continuity of cubic splines and derivatives and second derivatives adds three constraints, leaving one free parameter), making a total of $K+4$ parameters for $K$ knots.
A natural cubic spline is linear at both ends. This constrains the cubic and quadratic parts there to 0, each reducing the df by 1. That's 2 df at each of two ends of the curve, reducing $K+4$ to $K$.
Imagine you decide you can spend some total number of degrees of freedom ($p$, say) on your non-parametric curve estimate. Since imposing a natural spline uses 4 fewer degrees of freedom than an ordinary cubic spline (for the same number of knots), with those $p$ parameters you can have 4 more knots (and so 4 more parameters) to model the curve between the boundary knots.
Note that the definition for $N_{k+2}$ is for $k=1,2,...,K-2$ (since there are $K$ basis functions in all). So the last basis function in that list, $N_{K}=d_{K-2}-d_{K-1}$. So the highest $k$ needed for definitions of $d_k$ is for $k=K-1$. (That is, we don't need to try to figure out what some $d_K$ might do, since we don't use it.)
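This can be checked numerically using the textbook definitions $d_k(X) = \frac{(X-\xi_k)_+^3 - (X-\xi_K)_+^3}{\xi_K - \xi_k}$, $N_1 = 1$, $N_2 = X$, $N_{k+2} = d_k - d_{K-1}$ (a NumPy sketch; the knot locations are arbitrary). Beyond the boundary knot $\xi_K$ the cubic and quadratic terms cancel, so every basis function should be linear there:

```python
import numpy as np

xi = np.array([0.0, 1.0, 2.0, 3.0])   # K = 4 knots
K = len(xi)

def d(k, x):
    """d_k(x) = [(x - xi_k)_+^3 - (x - xi_K)_+^3] / (xi_K - xi_k), 1-based k."""
    pos = lambda t: np.maximum(t, 0.0) ** 3
    return (pos(x - xi[k - 1]) - pos(x - xi[K - 1])) / (xi[K - 1] - xi[k - 1])

def N(j, x):
    """Natural cubic spline basis: N_1 = 1, N_2 = x, N_{k+2} = d_k - d_{K-1}."""
    if j == 1:
        return np.ones_like(x)
    if j == 2:
        return x
    return d(j - 2, x) - d(K - 1, x)

# To the right of the boundary knot xi_K each N_j must be linear, i.e.
# have (numerically) zero second differences.
x = np.linspace(4.0, 8.0, 50)         # strictly beyond xi_K = 3
for j in range(1, K + 1):
    second_diff = np.diff(N(j, x), n=2)
    assert np.allclose(second_diff, 0.0, atol=1e-8)
print("all", K, "basis functions are linear beyond the boundary knot")
```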
|
The definition of natural cubic splines for regression
|
Let's start by considering ordinary cubic splines. They're cubic between every pair of knots and cubic outside the boundary knots. We start with 4df for the first cubic (left of the first boundary kno
|
The definition of natural cubic splines for regression
Let's start by considering ordinary cubic splines. They're cubic between every pair of knots and cubic outside the boundary knots. We start with 4df for the first cubic (left of the first boundary knot), and each knot adds one new parameter (because the continuity of cubic splines and derivatives and second derivatives adds three constraints, leaving one free parameter), making a total of $K+4$ parameters for $K$ knots.
A natural cubic spline is linear at both ends. This constrains the cubic and quadratic parts there to 0, each reducing the df by 1. That's 2 df at each of two ends of the curve, reducing $K+4$ to $K$.
Imagine you decide you can spend some total number of degrees of freedom ($p$, say) on your non-parametric curve estimate. Since imposing a natural spline uses 4 fewer degrees of freedom than an ordinary cubic spline (for the same number of knots), with those $p$ parameters you can have 4 more knots (and so 4 more parameters) to model the curve between the boundary knots.
Note that the definition for $N_{k+2}$ is for $k=1,2,...,K-2$ (since there are $K$ basis functions in all). So the last basis function in that list, $N_{K}=d_{K-2}-d_{K-1}$. So the highest $k$ needed for definitions of $d_k$ is for $k=K-1$. (That is, we don't need to try to figure out what some $d_K$ might do, since we don't use it.)
|
The definition of natural cubic splines for regression
Let's start by considering ordinary cubic splines. They're cubic between every pair of knots and cubic outside the boundary knots. We start with 4df for the first cubic (left of the first boundary kno
|
12,130
|
The definition of natural cubic splines for regression
|
I detail the assertion: "This frees up four degrees of freedom (two constraints each in both boundary regions)" in an example with $2$ knots $\xi_1, \xi_2$.
The related intervals are $]-\infty, \xi_1[$, $]\xi_1, \xi_2[$ and $]\xi_2, +\infty[$ (so there are $|I|=3$ intervals and $|I|-1=2$ knots).
For (common) cubic splines
Without regularity constraints, we have $4|I|=12$ equations:
$$\mathbf{1}(X < \xi_1)~~;~~\mathbf{1}(X < \xi_1)X~~;~~\mathbf{1}(X < \xi_1)X^2~~;~~\mathbf{1}(X < \xi_1)X^3~~;$$
$$\mathbf{1}(\xi_1 \leq X < \xi_2)~~;~~\mathbf{1}(\xi_1 \leq X < \xi_2)X~~;~~\mathbf{1}(\xi_1 \leq X < \xi_2)X^2~~;~~\mathbf{1}(\xi_1 \leq X < \xi_2)X^3~~;$$
$$\mathbf{1}(\xi_2 \leq X)~~;~~\mathbf{1}(\xi_2 \leq X)X~~;~~\mathbf{1}(\xi_2 \leq X)X^2~~;~~\mathbf{1}(\xi_2 \leq X)X^3.$$
By adding the constraints (cubic splines assume a $\mathcal{C}^r$ regularity with $r=2$), we need to add $(r+1)\times(|I|-1) = 3\times(|I|-1) = 6$ constraints on the linear coefficients.
We end up with $12-6=6$ degrees of freedom.
For natural cubic splines
"A natural cubic spline adds additional constraints, namely that the function is linear beyond the boundary knots."
Without regularity constraints, we have $4|I|-4=12-4$ equations (we have removed $4$ equations, $2$ each in both boundary regions because they involve quadratic and cubic polynomials):
$$\mathbf{1}(X < \xi_1)~~;~~\mathbf{1}(X < \xi_1)X~~;~~$$
$$\mathbf{1}(\xi_1 \leq X < \xi_2)~~;~~\mathbf{1}(\xi_1 \leq X < \xi_2)X~~;~~\mathbf{1}(\xi_1 \leq X < \xi_2)X^2~~;~~\mathbf{1}(\xi_1 \leq X < \xi_2)X^3~~;$$
$$\mathbf{1}(\xi_2 \leq X)~~;~~\mathbf{1}(\xi_2 \leq X)X.$$
The constraints are the same as before, so we still need to add $3\times(|I|-1) = 6$ constraints on the linear coefficients.
We end up with $8-6=2$ degrees of freedom.
|
The definition of natural cubic splines for regression
|
I detail the assertion: "This frees up four degrees of freedom (two constraints each in both boundary regions)" in an example with $2$ knots $\xi_1, \xi_2$.
The related intervals are $]-\infty, \xi_1
|
The definition of natural cubic splines for regression
I detail the assertion: "This frees up four degrees of freedom (two constraints each in both boundary regions)" in an example with $2$ knots $\xi_1, \xi_2$.
The related intervals are $]-\infty, \xi_1[$, $]\xi_1, \xi_2[$ and $]\xi_2, +\infty[$ (so there are $|I|=3$ intervals and $|I|-1=2$ knots).
For (common) cubic splines
Without regularity constraints, we have $4|I|=12$ equations:
$$\mathbf{1}(X < \xi_1)~~;~~\mathbf{1}(X < \xi_1)X~~;~~\mathbf{1}(X < \xi_1)X^2~~;~~\mathbf{1}(X < \xi_1)X^3~~;$$
$$\mathbf{1}(\xi_1 \leq X < \xi_2)~~;~~\mathbf{1}(\xi_1 \leq X < \xi_2)X~~;~~\mathbf{1}(\xi_1 \leq X < \xi_2)X^2~~;~~\mathbf{1}(\xi_1 \leq X < \xi_2)X^3~~;$$
$$\mathbf{1}(\xi_2 \leq X)~~;~~\mathbf{1}(\xi_2 \leq X)X~~;~~\mathbf{1}(\xi_2 \leq X)X^2~~;~~\mathbf{1}(\xi_2 \leq X)X^3.$$
By adding the constraints (cubic splines assume a $\mathcal{C}^r$ regularity with $r=2$), we need to add $(r+1)\times(|I|-1) = 3\times(|I|-1) = 6$ constraints on the linear coefficients.
We end up with $12-6=6$ degrees of freedom.
For natural cubic splines
"A natural cubic spline adds additional constraints, namely that the function is linear beyond the boundary knots."
Without regularity constraints, we have $4|I|-4=12-4$ equations (we have removed $4$ equations, $2$ each in both boundary regions because they involve quadratic and cubic polynomials):
$$\mathbf{1}(X < \xi_1)~~;~~\mathbf{1}(X < \xi_1)X~~;~~$$
$$\mathbf{1}(\xi_1 \leq X < \xi_2)~~;~~\mathbf{1}(\xi_1 \leq X < \xi_2)X~~;~~\mathbf{1}(\xi_1 \leq X < \xi_2)X^2~~;~~\mathbf{1}(\xi_1 \leq X < \xi_2)X^3~~;$$
$$\mathbf{1}(\xi_2 \leq X)~~;~~\mathbf{1}(\xi_2 \leq X)X.$$
The constraints are the same as before, so we still need to add $3\times(|I|-1) = 6$ constraints on the linear coefficients.
We end up with $8-6=2$ degrees of freedom.
|
The definition of natural cubic splines for regression
I detail the assertion: "This frees up four degrees of freedom (two constraints each in both boundary regions)" in an example with $2$ knots $\xi_1, \xi_2$.
The related intervals are $]-\infty, \xi_1
|
12,131
|
How should I check the assumption of linearity to the logit for the continuous independent variables in logistic regression analysis?
|
Logistic regression does NOT assume a linear relationship between the dependent and independent variables. It does assume a linear relationship between the log odds of the dependent variable and the independent variables. (This is mainly an issue with continuous independent variables.) There is a test called the Box-Tidwell that you can use for this. The Stata command is boxtid. I don't know the SPSS command, sorry.
This may be of help --
http://www.ats.ucla.edu/stat/stata/webbooks/logistic/chapter3/statalog3.htm
|
How should I check the assumption of linearity to the logit for the continuous independent variables
|
Logistic regression does NOT assume a linear relationship between the dependent and independent variables. It does assume a linear relationship between the log odds of the dependent variable and the
|
How should I check the assumption of linearity to the logit for the continuous independent variables in logistic regression analysis?
Logistic regression does NOT assume a linear relationship between the dependent and independent variables. It does assume a linear relationship between the log odds of the dependent variable and the independent variables. (This is mainly an issue with continuous independent variables.) There is a test called the Box-Tidwell that you can use for this. The Stata command is boxtid. I don't know the SPSS command, sorry.
This may be of help --
http://www.ats.ucla.edu/stat/stata/webbooks/logistic/chapter3/statalog3.htm
|
How should I check the assumption of linearity to the logit for the continuous independent variables
Logistic regression does NOT assume a linear relationship between the dependent and independent variables. It does assume a linear relationship between the log odds of the dependent variable and the
|
12,132
|
How should I check the assumption of linearity to the logit for the continuous independent variables in logistic regression analysis?
|
As I describe in detail in my book Regression Modeling Strategies (2nd edition available 2015-09-04, e-book available now), the process of attempting to transform variables before modeling is fraught with problems, one of the most important being the distortion of type I error and confidence intervals. Categorization causes even more severe problems, especially lack of fit and arbitrariness.
Instead of thinking about this as a "check for lack of fit" problem, it is better to think of it as specifying a model that is very likely to fit. One way to do this is to allocate parameters to the parts of the model that are likely to be strong and for which linearity is not already known to be a reasonable assumption. In this process one examines the effective sample size (in your case the minimum of the number of events and number of non-events) and allows complexity to the extent that the data's information content allows (using e.g. the 15:1 events:parameter rule of thumb). By pre-specifying a flexible additive parametric model one will only be wrong where it matters by omitting important interactions. Interactions should be pre-specified, generally speaking.
You can check whether nonlinearity was needed in the model with a formal test (made easy with the R rms package) but removing such terms when insignificant creates the inferential distortions I outlined above.
More details may be found at course notes linked to from https://hbiostat.org/rms
|
How should I check the assumption of linearity to the logit for the continuous independent variables
|
As I describe in detail in my book Regression Modeling Strategies (2nd edition available 2015-09-04, e-book available now), the process of attempting to transform variables before modeling is frought
|
How should I check the assumption of linearity to the logit for the continuous independent variables in logistic regression analysis?
As I describe in detail in my book Regression Modeling Strategies (2nd edition available 2015-09-04, e-book available now), the process of attempting to transform variables before modeling is fraught with problems, one of the most important being the distortion of type I error and confidence intervals. Categorization causes even more severe problems, especially lack of fit and arbitrariness.
Instead of thinking about this as a "check for lack of fit" problem, it is better to think of it as specifying a model that is very likely to fit. One way to do this is to allocate parameters to the parts of the model that are likely to be strong and for which linearity is not already known to be a reasonable assumption. In this process one examines the effective sample size (in your case the minimum of the number of events and number of non-events) and allows complexity to the extent that the data's information content allows (using e.g. the 15:1 events:parameter rule of thumb). By pre-specifying a flexible additive parametric model one will only be wrong where it matters by omitting important interactions. Interactions should be pre-specified, generally speaking.
You can check whether nonlinearity was needed in the model with a formal test (made easy with the R rms package) but removing such terms when insignificant creates the inferential distortions I outlined above.
More details may be found at course notes linked to from https://hbiostat.org/rms
|
How should I check the assumption of linearity to the logit for the continuous independent variables
As I describe in detail in my book Regression Modeling Strategies (2nd edition available 2015-09-04, e-book available now), the process of attempting to transform variables before modeling is frought
|
12,133
|
How should I check the assumption of linearity to the logit for the continuous independent variables in logistic regression analysis?
|
I think that we should plot continuous variables and check for linearity before using them in a regression model. If linearity seems like a reasonable assumption, I think this will probably still hold in the final multivariable regression model in most cases, and if not, I think this might primarily be caused by interaction effects that you can correct for.
Yes, categorizing non-linear continuous variables is one option. The problems with this are that categories may in most cases seem arbitrary, and small differences in cut-off scores between categories may lead to different results (especially regarding statistical significance), and, depending on the number of categories and the size of your data, you may lose much valuable information in the data.
An alternative approach is to use a generalized additive model which is a regression model that can be specified as a logistic regression, but in which you can include non-linear independent variables as "smoother functions". Technically, this is not very complicated in R, but I don't know about other software packages. These models will identify non-linear relationships to the dependent variables, but a drawback might be that you won't get neat and tidy numbers in your output to present, but rather a visual curve that is tested for statistical significance. So it depends how interested you are in quantifying the effect of the non-linear variable on the outcome variable.
Finally, you can use generalized additive models as described above to test the assumptions of linearity in your logistic regression model, at least if you use R.
Take a look at this book (a very different field from yours, and mine, but that doesn't matter at all): http://www.amazon.com/Effects-Extensions-Ecology-Statistics-Biology/dp/0387874577/ref=sr_1_1?ie=UTF8&qid=1440928328&sr=8-1&keywords=zuur+ecology
|
How should I check the assumption of linearity to the logit for the continuous independent variables
|
I think that we should plot continuous variables and check for linearity before using them in a regression model. If linearity seems like a reasonable assumption, I think this will probably still hold
|
How should I check the assumption of linearity to the logit for the continuous independent variables in logistic regression analysis?
I think that we should plot continuous variables and check for linearity before using them in a regression model. If linearity seems like a reasonable assumption, I think this will probably still hold in the final multivariable regression model in most cases, and if not, I think this might primarily be caused by interaction effects that you can correct for.
Yes, categorizing non-linear continuous variables is one option. The problems with this are that categories may in most cases seem arbitrary, and small differences in cut-off scores between categories may lead to different results (especially regarding statistical significance), and, depending on the number of categories and the size of your data, you may lose much valuable information in the data.
An alternative approach is to use a generalized additive model which is a regression model that can be specified as a logistic regression, but in which you can include non-linear independent variables as "smoother functions". Technically, this is not very complicated in R, but I don't know about other software packages. These models will identify non-linear relationships to the dependent variables, but a drawback might be that you won't get neat and tidy numbers in your output to present, but rather a visual curve that is tested for statistical significance. So it depends how interested you are in quantifying the effect of the non-linear variable on the outcome variable.
Finally, you can use generalized additive models as described above to test the assumptions of linearity in your logistic regression model, at least if you use R.
Take a look at this book (a very different field from yours, and mine, but that doesn't matter at all): http://www.amazon.com/Effects-Extensions-Ecology-Statistics-Biology/dp/0387874577/ref=sr_1_1?ie=UTF8&qid=1440928328&sr=8-1&keywords=zuur+ecology
|
How should I check the assumption of linearity to the logit for the continuous independent variables
I think that we should plot continuous variables and check for linearity before using them in a regression model. If linearity seems like a reasonable assumption, I think this will probably still hold
|
12,134
|
How should I check the assumption of linearity to the logit for the continuous independent variables in logistic regression analysis?
|
Since I don't know your data, I don't know if combining those three variables -- the basic variable, its natural log, and an interaction term -- will be a problem. However, I know that in the past, when I have considered combining three terms, I have often lost conceptual track of what I was measuring. You need to have a good handle on what you are measuring or you'll have trouble explaining your findings. Hope that helps!
|
How should I check the assumption of linearity to the logit for the continuous independent variables
|
Since I don't know your data I don't know if combining those three variables -- the basic variable, its natural log, and an interactive term -- will be a problem. However, I know that in the past w
|
How should I check the assumption of linearity to the logit for the continuous independent variables in logistic regression analysis?
Since I don't know your data, I don't know if combining those three variables -- the basic variable, its natural log, and an interaction term -- will be a problem. However, I know that in the past, when I have considered combining three terms, I have often lost conceptual track of what I was measuring. You need to have a good handle on what you are measuring or you'll have trouble explaining your findings. Hope that helps!
|
How should I check the assumption of linearity to the logit for the continuous independent variables
Since I don't know your data I don't know if combining those three variables -- the basic variable, its natural log, and an interactive term -- will be a problem. However, I know that in the past w
|
12,135
|
Train a Neural Network to distinguish between even and odd numbers
|
As with any machine learning task, the representation of your input plays a crucial role in how well you learn and generalise.
I think the problem with the representation is that the function (modulo) is highly non-linear and not smooth in the input representation you've chosen for this problem.
I would try the following:
Try a better learning algorithm (back-propagation/gradient descent and its variants).
Try representing the numbers in binary using a fixed length precision.
If your input representation is a b-bit number, I would ensure your training set isn't biased towards small or large numbers. Have numbers that are uniformly, and independently chosen at random from the range $[0, 2^b-1]$.
As you've done, use a multi-layer network (try 2 layers first: i.e., hidden+output, before using more layers).
Use a separate training+test set. Don't evaluate your performance on the training set.
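Points 2 and 3 can be sketched concretely: with a binary input representation, even a plain logistic regression (essentially a one-layer network) separates the classes, since parity depends only on the last bit. A NumPy-only sketch (the bit width, learning rate, and iteration count are arbitrary choices):

```python
import numpy as np

def to_bits(n, width=8):
    """Binary representation of n as a 0/1 feature vector (LSB last)."""
    return [int(b) for b in format(n, f"0{width}b")]

X = np.array([to_bits(n) for n in range(256)], dtype=float)
y = np.array([n % 2 for n in range(256)], dtype=float)

# Plain full-batch logistic regression trained by gradient descent.
w = np.zeros(X.shape[1])
b = 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.1 * X.T @ (p - y) / len(y)
    b -= 0.1 * np.mean(p - y)

# Parity depends only on the last bit, so the data are linearly separable.
acc = np.mean((X @ w + b > 0) == (y == 1))
print("accuracy:", acc)
```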
|
Train a Neural Network to distinguish between even and odd numbers
|
As with any machine learning task, the representation of your input plays a crucial role in how well you learn and generalise.
I think, the problem with the representation is that the function (modulo
|
Train a Neural Network to distinguish between even and odd numbers
As with any machine learning task, the representation of your input plays a crucial role in how well you learn and generalise.
I think the problem with the representation is that the function (modulo) is highly non-linear and not smooth in the input representation you've chosen for this problem.
I would try the following:
Try a better learning algorithm (back-propagation/gradient descent and its variants).
Try representing the numbers in binary using a fixed length precision.
If your input representation is a b-bit number, I would ensure your training set isn't biased towards small or large numbers. Have numbers that are uniformly, and independently chosen at random from the range $[0, 2^b-1]$.
As you've done, use a multi-layer network (try 2 layers first: i.e., hidden+output, before using more layers).
Use a separate training+test set. Don't evaluate your performance on the training set.
|
Train a Neural Network to distinguish between even and odd numbers
As with any machine learning task, the representation of your input plays a crucial role in how well you learn and generalise.
I think, the problem with the representation is that the function (modulo
|
12,136
|
Train a Neural Network to distinguish between even and odd numbers
|
Learning to classify odd numbers and even numbers is a difficult problem. A simple pattern keeps repeating infinitely.
2,4,6,8.....
1,3,5,7.....
Nonlinear activation functions like sin(x) and cos(x) behave similarly.
Therefore, if you change your neurons to implement sin and cos instead of popular activation functions like tanh or relu, I guess you can solve this problem fairly easily using a single neuron.
Linear transformations always precede nonlinear transformations. Therefore a single neuron will end up learning sin(ax+b) which for the right combination of a & b will output 0's and 1's alternatively in the desired frequency we want which in this case is 1.
I have never tried sin or cos in my neural networks before. So, apologies if it ends up being a very bad idea.
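The idea can be checked directly with NumPy: a single sin-activated unit $\sin^2(ax+b)$ with $a=\pi/2$, $b=0$ reproduces parity exactly on the integers (a sketch; I have not trained these parameters, just plugged in the values the argument suggests):

```python
import numpy as np

def parity_neuron(n):
    """A single sin-activated unit: sin(pi*n/2)^2 is 0 for even n, 1 for odd n."""
    return np.round(np.sin(np.pi * n / 2) ** 2).astype(int)

n = np.arange(1000)
assert np.array_equal(parity_neuron(n), n % 2)
print("sin neuron matches n % 2 on 0..999")
```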
|
Train a Neural Network to distinguish between even and odd numbers
|
Learning to classify odd numbers and even numbers is a difficult problem. A simple pattern keeps repeating infinitely.
2,4,6,8.....
1,3,5,7.....
Nonlinear activation functions like sin(x) and cos(x)
|
Train a Neural Network to distinguish between even and odd numbers
Learning to classify odd numbers and even numbers is a difficult problem. A simple pattern keeps repeating infinitely.
2,4,6,8.....
1,3,5,7.....
Nonlinear activation functions like sin(x) and cos(x) behave similarly.
Therefore, if you change your neurons to implement sin and cos instead of popular activation functions like tanh or relu, I guess you can solve this problem fairly easily using a single neuron.
Linear transformations always precede nonlinear transformations. Therefore a single neuron will end up learning sin(ax+b) which for the right combination of a & b will output 0's and 1's alternatively in the desired frequency we want which in this case is 1.
I have never tried sin or cos in my neural networks before. So, apologies if it ends up being a very bad idea.
|
Train a Neural Network to distinguish between even and odd numbers
Learning to classify odd numbers and even numbers is a difficult problem. A simple pattern keeps repeating infinitely.
2,4,6,8.....
1,3,5,7.....
Nonlinear activation functions like sin(x) and cos(x)
|
12,137
|
Train a Neural Network to distinguish between even and odd numbers
|
So I'm working with neural nets right now and I ran into the same issue as you. What I ended up doing was representing the input number as an array with values equal to the binary representation of the number. Since what we are doing is classifying I represented my output as an array, not a single value.
ex:
input = [
[0, 0, 0, 1], // 1
[0, 0, 1, 0], // 2
[0, 0, 1, 1], // 3
[0, 1, 0, 0] // 4
]
output = [
[1, 0], // odd
[0, 1], // even
[1, 0], // odd
[0, 1] // even
]
Hope this helps!
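For reference, a runnable Python version of this encoding (a sketch; the 4-bit width matches the example above):

```python
def to_input(n, width=4):
    """Number -> binary feature vector, e.g. 3 -> [0, 0, 1, 1]."""
    return [int(b) for b in format(n, f"0{width}b")]

def to_output(n):
    """Parity as a one-hot pair: [1, 0] = odd, [0, 1] = even."""
    return [1, 0] if n % 2 else [0, 1]

inputs = [to_input(n) for n in range(1, 5)]
outputs = [to_output(n) for n in range(1, 5)]
print(inputs)    # [[0, 0, 0, 1], [0, 0, 1, 0], [0, 0, 1, 1], [0, 1, 0, 0]]
print(outputs)   # [[1, 0], [0, 1], [1, 0], [0, 1]]
```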
|
Train a Neural Network to distinguish between even and odd numbers
|
So I'm working with neural nets right now and I ran into the same issue as you. What I ended up doing was representing the input number as an array with values equal to the binary representation of th
|
Train a Neural Network to distinguish between even and odd numbers
So I'm working with neural nets right now and I ran into the same issue as you. What I ended up doing was representing the input number as an array with values equal to the binary representation of the number. Since what we are doing is classifying I represented my output as an array, not a single value.
ex:
input = [
[0, 0, 0, 1], // 1
[0, 0, 1, 0], // 2
[0, 0, 1, 1], // 3
[0, 1, 0, 0] // 4
]
output = [
[1, 0], // odd
[0, 1], // even
[1, 0], // odd
[0, 1] // even
]
Hope this helps!
|
Train a Neural Network to distinguish between even and odd numbers
So I'm working with neural nets right now and I ran into the same issue as you. What I ended up doing was representing the input number as an array with values equal to the binary representation of th
|
12,138
|
Train a Neural Network to distinguish between even and odd numbers
|
I got here while struggling with a similar problem, so I'll write up what I managed.
As far as I know, a single-layer perceptron can solve any problem that ultimately reduces to separating objects with a straight line, i.e., linearly separable problems. And this is that kind of problem: if you plot the last bit of the binary representation on paper, you can draw a line with all odd numbers on one side and all even numbers on the other. For the same reason, it is impossible to solve the XOR problem with a one-layer network.
OK. This problem looks very simple, so let's take the Heaviside step as the activation function. After playing a little with my numbers, I realized that the problem here is the bias. I googled a little, and what I found is that, if you stay with the geometric picture, the bias lets you shift the position of the decision boundary in the coordinate system.
A very educational problem.
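A minimal sketch of such a perceptron, using the Heaviside step and the last bit as the single input feature. Here the unit learns to flag even numbers, a case where a non-zero bias is genuinely required: with no bias, an input of 0 can never fire the unit. (The labels and learning rate are illustrative choices.)

```python
# Classic perceptron learning rule on a 1-D feature: the number's last bit.
# Label 1 = even, 0 = odd, so the correct classifier needs a positive bias.
data = [(n % 2, 1 - n % 2) for n in range(8)]   # (last bit, is_even)

w, b = 0.0, 0.0
for _ in range(10):                          # a few epochs are plenty here
    for x, y in data:
        pred = 1 if w * x + b > 0 else 0     # Heaviside step activation
        w += (y - pred) * x                  # perceptron weight update...
        b += (y - pred)                      # ...the bias shifts the threshold

acc = sum((1 if w * x + b > 0 else 0) == y for x, y in data) / len(data)
print(f"w={w}, b={b}, accuracy={acc}")
```

Note that the learned bias ends up positive: without it, the input x = 0 (even numbers) could never produce an output of 1.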
|
Train a Neural Network to distinguish between even and odd numbers
|
I get here where was struggle with similar problem. So I write what I managed.
As far as I know one layer perceptron is able to solve every problem, which can be at the end simplified to divide objec
|
Train a Neural Network to distinguish between even and odd numbers
I got here while struggling with a similar problem, so I'll write up what I worked out.
As far as I know, a single-layer perceptron can solve any problem that ultimately reduces to separating objects with a straight line, and this is that kind of problem. If you plot the last bit of the binary representation on paper, you can draw a line with all odd numbers on one side and all even numbers on the other. For the same reason, it is impossible to solve the XOR problem with a single-layer network.
OK. This problem looks very simple, so let's take the Heaviside step as the activation function. After playing with the numbers for a while, I realized the difficulty here is the bias. I googled a bit, and what I found is that, staying with the geometric picture, the bias lets you shift where the unit activates in the coordinate system.
A very educational problem.
|
Train a Neural Network to distinguish between even and odd numbers
I get here where was struggle with similar problem. So I write what I managed.
As far as I know one layer perceptron is able to solve every problem, which can be at the end simplified to divide objec
|
12,139
|
Train a Neural Network to distinguish between even and odd numbers
|
It is well known that the logic gates NOT, AND, and OR can all be implemented with very simple neural networks (NNs), and that you can build a complete arithmetic calculator out of logic gates using binary numbers as input. Therefore you should be able to create a NN that calculates n modulo k for any numbers n and k expressed in base 2.
If you wish to calculate n modulo k for a fixed k (for example k = 4), you can actually create an extremely simple NN that does it: express the input number n in base k, ignore all digits other than the lowest-rank digit, and you have the answer!
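A sketch of the first claim: NOT, AND, and OR each as a single threshold unit. The weights and thresholds below are one conventional choice (my own), not the only workable one.

```python
# NOT, AND, OR as single threshold ("perceptron") units.
# The weights and thresholds are one conventional choice, not the only one.
def unit(weights, bias, inputs):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

def NOT(a):    return unit([-1], 0.5, [a])
def AND(a, b): return unit([1, 1], -1.5, [a, b])
def OR(a, b):  return unit([1, 1], -0.5, [a, b])

assert [NOT(0), NOT(1)] == [1, 0]
assert [AND(a, b) for a in (0, 1) for b in (0, 1)] == [0, 0, 0, 1]
assert [OR(a, b) for a in (0, 1) for b in (0, 1)] == [0, 1, 1, 1]
```

With these gates available, any boolean circuit over binary inputs, including an n-mod-k calculator, can in principle be wired up as a network.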
|
Train a Neural Network to distinguish between even and odd numbers
|
It is well known that logic gates NOT, AND, OR can all be done with very simple neural networks (NN), and that you can build a complete arithmetic calculator with logic gates using binary numbers as i
|
Train a Neural Network to distinguish between even and odd numbers
It is well known that the logic gates NOT, AND, and OR can all be implemented with very simple neural networks (NNs), and that you can build a complete arithmetic calculator out of logic gates using binary numbers as input. Therefore you should be able to create a NN that calculates n modulo k for any numbers n and k expressed in base 2.
If you wish to calculate n modulo k for a fixed k (for example k = 4), you can actually create an extremely simple NN that does it: express the input number n in base k, ignore all digits other than the lowest-rank digit, and you have the answer!
|
Train a Neural Network to distinguish between even and odd numbers
It is well known that logic gates NOT, AND, OR can all be done with very simple neural networks (NN), and that you can build a complete arithmetic calculator with logic gates using binary numbers as i
|
12,140
|
Train a Neural Network to distinguish between even and odd numbers
|
My solution
import numpy as np

def layer_1_z(x, w1, b1):
    return 1 / w1 * x + b1

def layer_2(x, w1, b1, w2, b2):
    y1 = layer_1_z(x, w1, b1)
    y2 = y1 - np.floor(y1)
    return w2 * y2 + b2

def layer_2_activation(x, w1, b1, w2, b2):
    y2 = layer_2(x, w1, b1, w2, b2)
    # return 1 / (1 + np.exp(-y2))
    return (y2 > 0) * 1

def loss(param):
    w1, b1, w2, b2 = param
    x = np.arange(0, 1000, 1)
    y_hat = layer_2_activation(x, w1, b1, w2, b2)
    y_true = (x % 2 > 0) * 1
    return sum(np.square(y_hat - y_true))

# %%
from sko.GA import GA

ga = GA(func=loss, n_dim=4, size_pop=50, max_iter=100, lb=[1, 0, 1, 0], ub=[32, 1, 2, 1], precision=1)
best_x, best_y = ga.run()
print('best_x:', best_x, '\n', 'best_y:', best_y)

for x in range(1001, 1200):
    y_hat = layer_2_activation(x, *best_x)
    print('input:{},divide by 2:{}'.format(x, y_hat == 0))
input:1001,divide by 2:False input:1002,divide by 2:True
input:1003,divide by 2:False input:1004,divide by 2:True
input:1005,divide by 2:False input:1006,divide by 2:True
input:1007,divide by 2:False input:1008,divide by 2:True
input:1009,divide by 2:False input:1010,divide by 2:True
input:1011,divide by 2:False input:1012,divide by 2:True
input:1013,divide by 2:False input:1014,divide by 2:True
input:1015,divide by 2:False input:1016,divide by 2:True
input:1017,divide by 2:False input:1018,divide by 2:True
input:1019,divide by 2:False input:1020,divide by 2:True
input:1021,divide by 2:False input:1022,divide by 2:True
input:1023,divide by 2:False input:1024,divide by 2:True
input:1025,divide by 2:False input:1026,divide by 2:True
input:1027,divide by 2:False input:1028,divide by 2:True
input:1029,divide by 2:False input:1030,divide by 2:True
input:1031,divide by 2:False input:1032,divide by 2:True
input:1033,divide by 2:False input:1034,divide by 2:True
input:1035,divide by 2:False input:1036,divide by 2:True
input:1037,divide by 2:False input:1038,divide by 2:True
input:1039,divide by 2:False input:1040,divide by 2:True
input:1041,divide by 2:False input:1042,divide by 2:True
input:1043,divide by 2:False input:1044,divide by 2:True
input:1045,divide by 2:False input:1046,divide by 2:True
input:1047,divide by 2:False input:1048,divide by 2:True
input:1049,divide by 2:False input:1050,divide by 2:True
input:1051,divide by 2:False input:1052,divide by 2:True
input:1053,divide by 2:False input:1054,divide by 2:True
input:1055,divide by 2:False input:1056,divide by 2:True
input:1057,divide by 2:False input:1058,divide by 2:True
input:1059,divide by 2:False input:1060,divide by 2:True
input:1061,divide by 2:False input:1062,divide by 2:True
input:1063,divide by 2:False input:1064,divide by 2:True
input:1065,divide by 2:False input:1066,divide by 2:True
input:1067,divide by 2:False input:1068,divide by 2:True
input:1069,divide by 2:False input:1070,divide by 2:True
input:1071,divide by 2:False input:1072,divide by 2:True
input:1073,divide by 2:False input:1074,divide by 2:True
input:1075,divide by 2:False input:1076,divide by 2:True
input:1077,divide by 2:False input:1078,divide by 2:True
input:1079,divide by 2:False input:1080,divide by 2:True
input:1081,divide by 2:False input:1082,divide by 2:True
input:1083,divide by 2:False input:1084,divide by 2:True
input:1085,divide by 2:False input:1086,divide by 2:True
input:1087,divide by 2:False input:1088,divide by 2:True
input:1089,divide by 2:False input:1090,divide by 2:True
input:1091,divide by 2:False input:1092,divide by 2:True
input:1093,divide by 2:False input:1094,divide by 2:True
input:1095,divide by 2:False input:1096,divide by 2:True
input:1097,divide by 2:False input:1098,divide by 2:True
input:1099,divide by 2:False input:1100,divide by 2:True
input:1101,divide by 2:False input:1102,divide by 2:True
input:1103,divide by 2:False input:1104,divide by 2:True
input:1105,divide by 2:False input:1106,divide by 2:True
input:1107,divide by 2:False input:1108,divide by 2:True
input:1109,divide by 2:False input:1110,divide by 2:True
input:1111,divide by 2:False input:1112,divide by 2:True
input:1113,divide by 2:False input:1114,divide by 2:True
input:1115,divide by 2:False input:1116,divide by 2:True
input:1117,divide by 2:False input:1118,divide by 2:True
input:1119,divide by 2:False input:1120,divide by 2:True
input:1121,divide by 2:False input:1122,divide by 2:True
input:1123,divide by 2:False input:1124,divide by 2:True
input:1125,divide by 2:False input:1126,divide by 2:True
input:1127,divide by 2:False input:1128,divide by 2:True
input:1129,divide by 2:False input:1130,divide by 2:True
input:1131,divide by 2:False input:1132,divide by 2:True
input:1133,divide by 2:False input:1134,divide by 2:True
input:1135,divide by 2:False input:1136,divide by 2:True
input:1137,divide by 2:False input:1138,divide by 2:True
input:1139,divide by 2:False input:1140,divide by 2:True
input:1141,divide by 2:False input:1142,divide by 2:True
input:1143,divide by 2:False input:1144,divide by 2:True
input:1145,divide by 2:False input:1146,divide by 2:True
input:1147,divide by 2:False input:1148,divide by 2:True
input:1149,divide by 2:False input:1150,divide by 2:True
input:1151,divide by 2:False input:1152,divide by 2:True
input:1153,divide by 2:False input:1154,divide by 2:True
input:1155,divide by 2:False input:1156,divide by 2:True
input:1157,divide by 2:False input:1158,divide by 2:True
input:1159,divide by 2:False input:1160,divide by 2:True
input:1161,divide by 2:False input:1162,divide by 2:True
input:1163,divide by 2:False input:1164,divide by 2:True
input:1165,divide by 2:False input:1166,divide by 2:True
input:1167,divide by 2:False input:1168,divide by 2:True
input:1169,divide by 2:False input:1170,divide by 2:True
input:1171,divide by 2:False input:1172,divide by 2:True
input:1173,divide by 2:False input:1174,divide by 2:True
input:1175,divide by 2:False input:1176,divide by 2:True
input:1177,divide by 2:False input:1178,divide by 2:True
input:1179,divide by 2:False input:1180,divide by 2:True
input:1181,divide by 2:False input:1182,divide by 2:True
input:1183,divide by 2:False input:1184,divide by 2:True
input:1185,divide by 2:False input:1186,divide by 2:True
input:1187,divide by 2:False input:1188,divide by 2:True
input:1189,divide by 2:False input:1190,divide by 2:True
input:1191,divide by 2:False input:1192,divide by 2:True
input:1193,divide by 2:False input:1194,divide by 2:True
input:1195,divide by 2:False input:1196,divide by 2:True
input:1197,divide by 2:False input:1198,divide by 2:True
input:1199,divide by 2:False
Moreover, divisibility by other numbers (say, 7) works well, too:
import numpy as np

def layer_1_z(x, w1, b1):
    return 1 / w1 * x + b1

def layer_2(x, w1, b1, w2, b2):
    y1 = layer_1_z(x, w1, b1)
    y2 = y1 - np.floor(y1)
    return w2 * y2 + b2

def layer_2_activation(x, w1, b1, w2, b2):
    y2 = layer_2(x, w1, b1, w2, b2)
    # return 1 / (1 + np.exp(-y2))
    return (y2 > 0) * 1

def loss(param):
    w1, b1, w2, b2 = param
    x = np.arange(0, 1000, 1)
    y_hat = layer_2_activation(x, w1, b1, w2, b2)
    y_true = (x % 7 > 0) * 1
    return sum(np.square(y_hat - y_true))

# %%
from sko.GA import GA

ga = GA(func=loss, n_dim=4, size_pop=50, max_iter=100, lb=[1, 0, 1, 0], ub=[32, 1, 2, 1], precision=1)
best_x, best_y = ga.run()
print('best_x:', best_x, '\n', 'best_y:', best_y)

for x in range(1001, 1200):
    y_hat = layer_2_activation(x, *best_x)
    print('input:{},divide by 7:{}'.format(x, y_hat == 0))
input:1001,divide by 7:True input:1002,divide by 7:False
input:1003,divide by 7:False input:1004,divide by 7:False
input:1005,divide by 7:False input:1006,divide by 7:False
input:1007,divide by 7:False input:1008,divide by 7:True
input:1009,divide by 7:False input:1010,divide by 7:False
input:1011,divide by 7:False input:1012,divide by 7:False
input:1013,divide by 7:False input:1014,divide by 7:False
input:1015,divide by 7:True input:1016,divide by 7:False
input:1017,divide by 7:False input:1018,divide by 7:False
input:1019,divide by 7:False input:1020,divide by 7:False
input:1021,divide by 7:False input:1022,divide by 7:True
input:1023,divide by 7:False input:1024,divide by 7:False
input:1025,divide by 7:False input:1026,divide by 7:False
input:1027,divide by 7:False input:1028,divide by 7:False
input:1029,divide by 7:True input:1030,divide by 7:False
input:1031,divide by 7:False input:1032,divide by 7:False
input:1033,divide by 7:False input:1034,divide by 7:False
input:1035,divide by 7:False input:1036,divide by 7:True
input:1037,divide by 7:False input:1038,divide by 7:False
input:1039,divide by 7:False input:1040,divide by 7:False
input:1041,divide by 7:False input:1042,divide by 7:False
input:1043,divide by 7:True input:1044,divide by 7:False
input:1045,divide by 7:False input:1046,divide by 7:False
input:1047,divide by 7:False input:1048,divide by 7:False
input:1049,divide by 7:False input:1050,divide by 7:True
input:1051,divide by 7:False input:1052,divide by 7:False
input:1053,divide by 7:False input:1054,divide by 7:False
input:1055,divide by 7:False input:1056,divide by 7:False
input:1057,divide by 7:True input:1058,divide by 7:False
input:1059,divide by 7:False input:1060,divide by 7:False
input:1061,divide by 7:False input:1062,divide by 7:False
input:1063,divide by 7:False input:1064,divide by 7:True
input:1065,divide by 7:False input:1066,divide by 7:False
input:1067,divide by 7:False input:1068,divide by 7:False
input:1069,divide by 7:False input:1070,divide by 7:False
input:1071,divide by 7:True input:1072,divide by 7:False
input:1073,divide by 7:False input:1074,divide by 7:False
input:1075,divide by 7:False input:1076,divide by 7:False
input:1077,divide by 7:False input:1078,divide by 7:True
input:1079,divide by 7:False input:1080,divide by 7:False
input:1081,divide by 7:False input:1082,divide by 7:False
input:1083,divide by 7:False input:1084,divide by 7:False
input:1085,divide by 7:True input:1086,divide by 7:False
input:1087,divide by 7:False input:1088,divide by 7:False
input:1089,divide by 7:False input:1090,divide by 7:False
input:1091,divide by 7:False input:1092,divide by 7:True
input:1093,divide by 7:False input:1094,divide by 7:False
input:1095,divide by 7:False input:1096,divide by 7:False
input:1097,divide by 7:False input:1098,divide by 7:False
input:1099,divide by 7:True input:1100,divide by 7:False
input:1101,divide by 7:False input:1102,divide by 7:False
input:1103,divide by 7:False input:1104,divide by 7:False
input:1105,divide by 7:False input:1106,divide by 7:True
input:1107,divide by 7:False input:1108,divide by 7:False
input:1109,divide by 7:False input:1110,divide by 7:False
input:1111,divide by 7:False input:1112,divide by 7:False
input:1113,divide by 7:True input:1114,divide by 7:False
input:1115,divide by 7:False input:1116,divide by 7:False
input:1117,divide by 7:False input:1118,divide by 7:False
input:1119,divide by 7:False input:1120,divide by 7:True
input:1121,divide by 7:False input:1122,divide by 7:False
input:1123,divide by 7:False input:1124,divide by 7:False
input:1125,divide by 7:False input:1126,divide by 7:False
input:1127,divide by 7:True input:1128,divide by 7:False
input:1129,divide by 7:False input:1130,divide by 7:False
input:1131,divide by 7:False input:1132,divide by 7:False
input:1133,divide by 7:False input:1134,divide by 7:True
input:1135,divide by 7:False input:1136,divide by 7:False
input:1137,divide by 7:False input:1138,divide by 7:False
input:1139,divide by 7:False input:1140,divide by 7:False
input:1141,divide by 7:True input:1142,divide by 7:False
input:1143,divide by 7:False input:1144,divide by 7:False
input:1145,divide by 7:False input:1146,divide by 7:False
input:1147,divide by 7:False input:1148,divide by 7:True
input:1149,divide by 7:False input:1150,divide by 7:False
input:1151,divide by 7:False input:1152,divide by 7:False
input:1153,divide by 7:False input:1154,divide by 7:False
input:1155,divide by 7:True input:1156,divide by 7:False
input:1157,divide by 7:False input:1158,divide by 7:False
input:1159,divide by 7:False input:1160,divide by 7:False
input:1161,divide by 7:False input:1162,divide by 7:True
input:1163,divide by 7:False input:1164,divide by 7:False
input:1165,divide by 7:False input:1166,divide by 7:False
input:1167,divide by 7:False input:1168,divide by 7:False
input:1169,divide by 7:True input:1170,divide by 7:False
input:1171,divide by 7:False input:1172,divide by 7:False
input:1173,divide by 7:False input:1174,divide by 7:False
input:1175,divide by 7:False input:1176,divide by 7:True
input:1177,divide by 7:False input:1178,divide by 7:False
input:1179,divide by 7:False input:1180,divide by 7:False
input:1181,divide by 7:False input:1182,divide by 7:False
input:1183,divide by 7:True input:1184,divide by 7:False
input:1185,divide by 7:False input:1186,divide by 7:False
input:1187,divide by 7:False input:1188,divide by 7:False
input:1189,divide by 7:False input:1190,divide by 7:True
input:1191,divide by 7:False input:1192,divide by 7:False
input:1193,divide by 7:False input:1194,divide by 7:False
input:1195,divide by 7:False input:1196,divide by 7:False
input:1197,divide by 7:True input:1198,divide by 7:False
input:1199,divide by 7:False
Explanation:
I found 2 different solutions; both work well:
1. sin as the activation
2. floor (or int) as the activation
It is impossible to find the best weights using gradient descent (the floor activation is flat almost everywhere, so its gradient is zero), so I used a genetic algorithm (from scikit-opt).
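The architecture above can also be sanity-checked without running the genetic algorithm, by plugging in hand-picked parameters. These values are my own choice (note b2 = -0.25 lies outside the GA's stated bounds, which is fine for a check): with w1 = 2 the first layer computes x / 2, whose fractional part is 0 for even x and 0.5 for odd x, and the second layer thresholds it.

```python
import numpy as np

# Hand-picked parameters for the floor-activation network above (my own
# choice, not GA output): w1=2, b1=0, w2=1, b2=-0.25.
# Layer 1 computes x / 2; its fractional part is 0 for even x, 0.5 for odd x.
def layer_1_z(x, w1, b1):
    return 1 / w1 * x + b1

def layer_2(x, w1, b1, w2, b2):
    y1 = layer_1_z(x, w1, b1)
    y2 = y1 - np.floor(y1)                         # the "floor" activation
    return w2 * y2 + b2

def layer_2_activation(x, w1, b1, w2, b2):
    return (layer_2(x, w1, b1, w2, b2) > 0) * 1    # 1 = odd, 0 = even

x = np.arange(0, 1000)
assert np.array_equal(layer_2_activation(x, 2, 0, 1, -0.25), x % 2)
```

This also shows why the GA succeeds: the loss landscape has exact zeros at such parameter settings, even though gradient descent cannot reach them through the flat floor activation.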
|
Train a Neural Network to distinguish between even and odd numbers
|
My solution
import numpy as np
def layer_1_z(x, w1, b1):
return 1 / w1 * x + b1
def layer_2(x, w1, b1, w2, b2):
y1 = layer_1_z(x, w1, b1)
y2 = y1 - np.floor(y1)
return w2 * y2 + b2
|
Train a Neural Network to distinguish between even and odd numbers
My solution
import numpy as np

def layer_1_z(x, w1, b1):
    return 1 / w1 * x + b1

def layer_2(x, w1, b1, w2, b2):
    y1 = layer_1_z(x, w1, b1)
    y2 = y1 - np.floor(y1)
    return w2 * y2 + b2

def layer_2_activation(x, w1, b1, w2, b2):
    y2 = layer_2(x, w1, b1, w2, b2)
    # return 1 / (1 + np.exp(-y2))
    return (y2 > 0) * 1

def loss(param):
    w1, b1, w2, b2 = param
    x = np.arange(0, 1000, 1)
    y_hat = layer_2_activation(x, w1, b1, w2, b2)
    y_true = (x % 2 > 0) * 1
    return sum(np.square(y_hat - y_true))

# %%
from sko.GA import GA

ga = GA(func=loss, n_dim=4, size_pop=50, max_iter=100, lb=[1, 0, 1, 0], ub=[32, 1, 2, 1], precision=1)
best_x, best_y = ga.run()
print('best_x:', best_x, '\n', 'best_y:', best_y)

for x in range(1001, 1200):
    y_hat = layer_2_activation(x, *best_x)
    print('input:{},divide by 2:{}'.format(x, y_hat == 0))
input:1001,divide by 2:False input:1002,divide by 2:True
input:1003,divide by 2:False input:1004,divide by 2:True
input:1005,divide by 2:False input:1006,divide by 2:True
input:1007,divide by 2:False input:1008,divide by 2:True
input:1009,divide by 2:False input:1010,divide by 2:True
input:1011,divide by 2:False input:1012,divide by 2:True
input:1013,divide by 2:False input:1014,divide by 2:True
input:1015,divide by 2:False input:1016,divide by 2:True
input:1017,divide by 2:False input:1018,divide by 2:True
input:1019,divide by 2:False input:1020,divide by 2:True
input:1021,divide by 2:False input:1022,divide by 2:True
input:1023,divide by 2:False input:1024,divide by 2:True
input:1025,divide by 2:False input:1026,divide by 2:True
input:1027,divide by 2:False input:1028,divide by 2:True
input:1029,divide by 2:False input:1030,divide by 2:True
input:1031,divide by 2:False input:1032,divide by 2:True
input:1033,divide by 2:False input:1034,divide by 2:True
input:1035,divide by 2:False input:1036,divide by 2:True
input:1037,divide by 2:False input:1038,divide by 2:True
input:1039,divide by 2:False input:1040,divide by 2:True
input:1041,divide by 2:False input:1042,divide by 2:True
input:1043,divide by 2:False input:1044,divide by 2:True
input:1045,divide by 2:False input:1046,divide by 2:True
input:1047,divide by 2:False input:1048,divide by 2:True
input:1049,divide by 2:False input:1050,divide by 2:True
input:1051,divide by 2:False input:1052,divide by 2:True
input:1053,divide by 2:False input:1054,divide by 2:True
input:1055,divide by 2:False input:1056,divide by 2:True
input:1057,divide by 2:False input:1058,divide by 2:True
input:1059,divide by 2:False input:1060,divide by 2:True
input:1061,divide by 2:False input:1062,divide by 2:True
input:1063,divide by 2:False input:1064,divide by 2:True
input:1065,divide by 2:False input:1066,divide by 2:True
input:1067,divide by 2:False input:1068,divide by 2:True
input:1069,divide by 2:False input:1070,divide by 2:True
input:1071,divide by 2:False input:1072,divide by 2:True
input:1073,divide by 2:False input:1074,divide by 2:True
input:1075,divide by 2:False input:1076,divide by 2:True
input:1077,divide by 2:False input:1078,divide by 2:True
input:1079,divide by 2:False input:1080,divide by 2:True
input:1081,divide by 2:False input:1082,divide by 2:True
input:1083,divide by 2:False input:1084,divide by 2:True
input:1085,divide by 2:False input:1086,divide by 2:True
input:1087,divide by 2:False input:1088,divide by 2:True
input:1089,divide by 2:False input:1090,divide by 2:True
input:1091,divide by 2:False input:1092,divide by 2:True
input:1093,divide by 2:False input:1094,divide by 2:True
input:1095,divide by 2:False input:1096,divide by 2:True
input:1097,divide by 2:False input:1098,divide by 2:True
input:1099,divide by 2:False input:1100,divide by 2:True
input:1101,divide by 2:False input:1102,divide by 2:True
input:1103,divide by 2:False input:1104,divide by 2:True
input:1105,divide by 2:False input:1106,divide by 2:True
input:1107,divide by 2:False input:1108,divide by 2:True
input:1109,divide by 2:False input:1110,divide by 2:True
input:1111,divide by 2:False input:1112,divide by 2:True
input:1113,divide by 2:False input:1114,divide by 2:True
input:1115,divide by 2:False input:1116,divide by 2:True
input:1117,divide by 2:False input:1118,divide by 2:True
input:1119,divide by 2:False input:1120,divide by 2:True
input:1121,divide by 2:False input:1122,divide by 2:True
input:1123,divide by 2:False input:1124,divide by 2:True
input:1125,divide by 2:False input:1126,divide by 2:True
input:1127,divide by 2:False input:1128,divide by 2:True
input:1129,divide by 2:False input:1130,divide by 2:True
input:1131,divide by 2:False input:1132,divide by 2:True
input:1133,divide by 2:False input:1134,divide by 2:True
input:1135,divide by 2:False input:1136,divide by 2:True
input:1137,divide by 2:False input:1138,divide by 2:True
input:1139,divide by 2:False input:1140,divide by 2:True
input:1141,divide by 2:False input:1142,divide by 2:True
input:1143,divide by 2:False input:1144,divide by 2:True
input:1145,divide by 2:False input:1146,divide by 2:True
input:1147,divide by 2:False input:1148,divide by 2:True
input:1149,divide by 2:False input:1150,divide by 2:True
input:1151,divide by 2:False input:1152,divide by 2:True
input:1153,divide by 2:False input:1154,divide by 2:True
input:1155,divide by 2:False input:1156,divide by 2:True
input:1157,divide by 2:False input:1158,divide by 2:True
input:1159,divide by 2:False input:1160,divide by 2:True
input:1161,divide by 2:False input:1162,divide by 2:True
input:1163,divide by 2:False input:1164,divide by 2:True
input:1165,divide by 2:False input:1166,divide by 2:True
input:1167,divide by 2:False input:1168,divide by 2:True
input:1169,divide by 2:False input:1170,divide by 2:True
input:1171,divide by 2:False input:1172,divide by 2:True
input:1173,divide by 2:False input:1174,divide by 2:True
input:1175,divide by 2:False input:1176,divide by 2:True
input:1177,divide by 2:False input:1178,divide by 2:True
input:1179,divide by 2:False input:1180,divide by 2:True
input:1181,divide by 2:False input:1182,divide by 2:True
input:1183,divide by 2:False input:1184,divide by 2:True
input:1185,divide by 2:False input:1186,divide by 2:True
input:1187,divide by 2:False input:1188,divide by 2:True
input:1189,divide by 2:False input:1190,divide by 2:True
input:1191,divide by 2:False input:1192,divide by 2:True
input:1193,divide by 2:False input:1194,divide by 2:True
input:1195,divide by 2:False input:1196,divide by 2:True
input:1197,divide by 2:False input:1198,divide by 2:True
input:1199,divide by 2:False
Moreover, divisibility by other numbers (say, 7) works well, too:
import numpy as np

def layer_1_z(x, w1, b1):
    return 1 / w1 * x + b1

def layer_2(x, w1, b1, w2, b2):
    y1 = layer_1_z(x, w1, b1)
    y2 = y1 - np.floor(y1)
    return w2 * y2 + b2

def layer_2_activation(x, w1, b1, w2, b2):
    y2 = layer_2(x, w1, b1, w2, b2)
    # return 1 / (1 + np.exp(-y2))
    return (y2 > 0) * 1

def loss(param):
    w1, b1, w2, b2 = param
    x = np.arange(0, 1000, 1)
    y_hat = layer_2_activation(x, w1, b1, w2, b2)
    y_true = (x % 7 > 0) * 1
    return sum(np.square(y_hat - y_true))

# %%
from sko.GA import GA

ga = GA(func=loss, n_dim=4, size_pop=50, max_iter=100, lb=[1, 0, 1, 0], ub=[32, 1, 2, 1], precision=1)
best_x, best_y = ga.run()
print('best_x:', best_x, '\n', 'best_y:', best_y)

for x in range(1001, 1200):
    y_hat = layer_2_activation(x, *best_x)
    print('input:{},divide by 7:{}'.format(x, y_hat == 0))
input:1001,divide by 7:True input:1002,divide by 7:False
input:1003,divide by 7:False input:1004,divide by 7:False
input:1005,divide by 7:False input:1006,divide by 7:False
input:1007,divide by 7:False input:1008,divide by 7:True
input:1009,divide by 7:False input:1010,divide by 7:False
input:1011,divide by 7:False input:1012,divide by 7:False
input:1013,divide by 7:False input:1014,divide by 7:False
input:1015,divide by 7:True input:1016,divide by 7:False
input:1017,divide by 7:False input:1018,divide by 7:False
input:1019,divide by 7:False input:1020,divide by 7:False
input:1021,divide by 7:False input:1022,divide by 7:True
input:1023,divide by 7:False input:1024,divide by 7:False
input:1025,divide by 7:False input:1026,divide by 7:False
input:1027,divide by 7:False input:1028,divide by 7:False
input:1029,divide by 7:True input:1030,divide by 7:False
input:1031,divide by 7:False input:1032,divide by 7:False
input:1033,divide by 7:False input:1034,divide by 7:False
input:1035,divide by 7:False input:1036,divide by 7:True
input:1037,divide by 7:False input:1038,divide by 7:False
input:1039,divide by 7:False input:1040,divide by 7:False
input:1041,divide by 7:False input:1042,divide by 7:False
input:1043,divide by 7:True input:1044,divide by 7:False
input:1045,divide by 7:False input:1046,divide by 7:False
input:1047,divide by 7:False input:1048,divide by 7:False
input:1049,divide by 7:False input:1050,divide by 7:True
input:1051,divide by 7:False input:1052,divide by 7:False
input:1053,divide by 7:False input:1054,divide by 7:False
input:1055,divide by 7:False input:1056,divide by 7:False
input:1057,divide by 7:True input:1058,divide by 7:False
input:1059,divide by 7:False input:1060,divide by 7:False
input:1061,divide by 7:False input:1062,divide by 7:False
input:1063,divide by 7:False input:1064,divide by 7:True
input:1065,divide by 7:False input:1066,divide by 7:False
input:1067,divide by 7:False input:1068,divide by 7:False
input:1069,divide by 7:False input:1070,divide by 7:False
input:1071,divide by 7:True input:1072,divide by 7:False
input:1073,divide by 7:False input:1074,divide by 7:False
input:1075,divide by 7:False input:1076,divide by 7:False
input:1077,divide by 7:False input:1078,divide by 7:True
input:1079,divide by 7:False input:1080,divide by 7:False
input:1081,divide by 7:False input:1082,divide by 7:False
input:1083,divide by 7:False input:1084,divide by 7:False
input:1085,divide by 7:True input:1086,divide by 7:False
input:1087,divide by 7:False input:1088,divide by 7:False
input:1089,divide by 7:False input:1090,divide by 7:False
input:1091,divide by 7:False input:1092,divide by 7:True
input:1093,divide by 7:False input:1094,divide by 7:False
input:1095,divide by 7:False input:1096,divide by 7:False
input:1097,divide by 7:False input:1098,divide by 7:False
input:1099,divide by 7:True input:1100,divide by 7:False
input:1101,divide by 7:False input:1102,divide by 7:False
input:1103,divide by 7:False input:1104,divide by 7:False
input:1105,divide by 7:False input:1106,divide by 7:True
input:1107,divide by 7:False input:1108,divide by 7:False
input:1109,divide by 7:False input:1110,divide by 7:False
input:1111,divide by 7:False input:1112,divide by 7:False
input:1113,divide by 7:True input:1114,divide by 7:False
input:1115,divide by 7:False input:1116,divide by 7:False
input:1117,divide by 7:False input:1118,divide by 7:False
input:1119,divide by 7:False input:1120,divide by 7:True
input:1121,divide by 7:False input:1122,divide by 7:False
input:1123,divide by 7:False input:1124,divide by 7:False
input:1125,divide by 7:False input:1126,divide by 7:False
input:1127,divide by 7:True input:1128,divide by 7:False
input:1129,divide by 7:False input:1130,divide by 7:False
input:1131,divide by 7:False input:1132,divide by 7:False
input:1133,divide by 7:False input:1134,divide by 7:True
input:1135,divide by 7:False input:1136,divide by 7:False
input:1137,divide by 7:False input:1138,divide by 7:False
input:1139,divide by 7:False input:1140,divide by 7:False
input:1141,divide by 7:True input:1142,divide by 7:False
input:1143,divide by 7:False input:1144,divide by 7:False
input:1145,divide by 7:False input:1146,divide by 7:False
input:1147,divide by 7:False input:1148,divide by 7:True
input:1149,divide by 7:False input:1150,divide by 7:False
input:1151,divide by 7:False input:1152,divide by 7:False
input:1153,divide by 7:False input:1154,divide by 7:False
input:1155,divide by 7:True input:1156,divide by 7:False
input:1157,divide by 7:False input:1158,divide by 7:False
input:1159,divide by 7:False input:1160,divide by 7:False
input:1161,divide by 7:False input:1162,divide by 7:True
input:1163,divide by 7:False input:1164,divide by 7:False
input:1165,divide by 7:False input:1166,divide by 7:False
input:1167,divide by 7:False input:1168,divide by 7:False
input:1169,divide by 7:True input:1170,divide by 7:False
input:1171,divide by 7:False input:1172,divide by 7:False
input:1173,divide by 7:False input:1174,divide by 7:False
input:1175,divide by 7:False input:1176,divide by 7:True
input:1177,divide by 7:False input:1178,divide by 7:False
input:1179,divide by 7:False input:1180,divide by 7:False
input:1181,divide by 7:False input:1182,divide by 7:False
input:1183,divide by 7:True input:1184,divide by 7:False
input:1185,divide by 7:False input:1186,divide by 7:False
input:1187,divide by 7:False input:1188,divide by 7:False
input:1189,divide by 7:False input:1190,divide by 7:True
input:1191,divide by 7:False input:1192,divide by 7:False
input:1193,divide by 7:False input:1194,divide by 7:False
input:1195,divide by 7:False input:1196,divide by 7:False
input:1197,divide by 7:True input:1198,divide by 7:False
input:1199,divide by 7:False
Explanation:
I found 2 different solutions; both work well:
1. sin as the activation
2. floor (or int) as the activation
It is impossible to find the best weights using gradient descent (the floor activation is flat almost everywhere, so its gradient is zero), so I used a genetic algorithm (from scikit-opt).
|
Train a Neural Network to distinguish between even and odd numbers
My solution
import numpy as np
def layer_1_z(x, w1, b1):
return 1 / w1 * x + b1
def layer_2(x, w1, b1, w2, b2):
y1 = layer_1_z(x, w1, b1)
y2 = y1 - np.floor(y1)
return w2 * y2 + b2
|
12,141
|
Train a Neural Network to distinguish between even and odd numbers
|
One idea that avoids an explicit "mod 2" in the input is to encode the number as a sequence of pixels; the problem then amounts to recognizing whether the segment can be split into two equal segments. That is a machine-vision problem, and one that conventional networks could learn.
At the other extreme, if the number is stored as a float, the question reduces (or generalizes) to recognizing when a float is approximately an integer.
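The float view can be sketched directly: n is even exactly when n / 2 is (up to a floating-point tolerance) an integer. The tolerance value here is my own choice.

```python
# Sketch of the float view: n is even exactly when n / 2 is (up to a
# floating-point tolerance, chosen here as 1e-9) an integer.
def is_even(n, eps=1e-9):
    half = n / 2.0
    return abs(half - round(half)) < eps

assert [is_even(n) for n in range(6)] == [True, False, True, False, True, False]
```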
|
Train a Neural Network to distinguish between even and odd numbers
|
One idea evading the use of explicit "mod 2" in the input could be to codify the number as a sequence of pixels, then the problem amounts to recognize if the segment can be split into two equal segmen
|
Train a Neural Network to distinguish between even and odd numbers
One idea that avoids an explicit "mod 2" in the input is to encode the number as a sequence of pixels; the problem then amounts to recognizing whether the segment can be split into two equal segments. That is a machine-vision problem, and one that conventional networks could learn.
At the other extreme, if the number is stored as a float, the question reduces (or generalizes) to recognizing when a float is approximately an integer.
|
Train a Neural Network to distinguish between even and odd numbers
One idea evading the use of explicit "mod 2" in the input could be to codify the number as a sequence of pixels, then the problem amounts to recognize if the segment can be split into two equal segmen
|
12,142
|
Train a Neural Network to distinguish between even and odd numbers
|
I created such a network here.
The representation @William Gottschalk gave was the foundation.
It just uses 1 neuron in the first hidden layer with 32 inputs. The output layer has just 2 neurons for one-hot encoding of 0 and 1.
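The linked network's weights are learned, but the described shape (one hidden neuron over 32 binary inputs, two one-hot outputs) admits a hand-set solution. Everything below, including the weight values, is a hypothetical sketch of my own rather than the actual trained network:

```python
import numpy as np

def to_bits(n, width=32):
    # binary encoding of n as a 32-component input vector, LSB first
    return np.array([(n >> i) & 1 for i in range(width)], dtype=float)

# hypothetical hand-set weights: the single hidden neuron reads only the
# least-significant bit, which already carries the parity
w_hidden = np.zeros(32)
w_hidden[0] = 1.0

def one_hot_parity(n):
    h = w_hidden @ to_bits(n)        # 0.0 if even, 1.0 if odd
    logits = np.array([1.0 - h, h])  # output pair: [even, odd]
    return int(np.argmax(logits))    # 0 = even, 1 = odd

parities = [one_hot_parity(n) for n in (4, 7, 10, 33)]  # [0, 1, 0, 1]
```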
|
12,143
|
What exactly are moments? How are they derived?
|
It's been a long time since I took a physics class, so let me know if any of this is incorrect.
General description of moments with physical analogs
Take a random variable, $X$. The $n$-th moment of $X$ around $c$ is:
$$m_n(c)=E[(X-c)^n]$$
This corresponds exactly to the physical sense of a moment. Imagine $X$ as a collection of points along the real line with density given by the pdf. Place a fulcrum under this line at $c$ and start calculating moments relative to that fulcrum, and the calculations will correspond exactly to statistical moments.
Most of the time, the $n$-th moment of $X$ refers to the moment around 0 (moments where the fulcrum is placed at 0):
$$m_n=E[X^n]$$
The $n$-th central moment of $X$ is:
$$\hat m_n=m_n(m_1) =E[(X-m_1)^n]$$
This corresponds to moments where the fulcrum is placed at the center of mass, so the distribution is balanced. It allows moments to be more easily interpreted, as we'll see below. The first central moment will always be zero, because the distribution is balanced.
The $n$-th standardized moment of $X$ is:
$$\tilde m_n = \dfrac{\hat m_n}{\left(\sqrt{\hat m_2}\right)^n}=\dfrac{E[(X-m_1)^n]}
{\left(\sqrt{E[(X-m_1)^2]}\right)^n}$$
Again, this scales moments by the spread of the distribution, allowing for easier interpretation specifically of Kurtosis. The first standardized moment will always be zero, the second will always be one. This corresponds to the moment of the standard score (z-score) of a variable. I don't have a great physical analog for this concept.
Commonly used moments
For any distribution there are potentially an infinite number of moments. Enough moments will almost always fully characterize a distribution (deriving the conditions under which this is guaranteed is part of the moment problem). Four moments come up constantly in statistics:
Mean - the 1st moment (centered around zero). It is the center of mass of the distribution, or alternatively it's proportional to the moment of torque of the distribution relative to a fulcrum at 0.
Variance - the 2nd central moment. Interpreted as representing the degree to which the distribution of $X$ is spread out. It corresponds to the moment of inertia of a distribution balanced on its fulcrum.
Skewness - the 3rd central moment (sometimes standardized). A measure of the skew of a distribution in one direction or another. Relative to a normal distribution (which has no skew), a positively skewed distribution has a long right tail (rare, extremely high outcomes), while a negatively skewed distribution has a long left tail (rare, extremely low outcomes). Physical analogs are difficult, but loosely it measures the asymmetry of a distribution. As an example, the figure below is taken from Wikipedia.
Kurtosis - the 4th standardized moment; usually the excess kurtosis, the 4th standardized moment minus three, is reported. Kurtosis measures the heaviness of the tails of a distribution: higher kurtosis means more of the variance comes from infrequent extreme deviations from the mean, as opposed to frequent modestly sized ones. It is often interpreted relative to the normal distribution, which has a 4th standardized moment of 3 and hence an excess kurtosis of 0. Here a physical analog is even more difficult, but in the figure below, taken from Wikipedia, the distributions with higher peaks have greater kurtosis.
We rarely talk about moments beyond Kurtosis, precisely because there is very little intuition to them. This is similar to physicists stopping after the second moment.
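These definitions translate directly into code. As a sketch (my own illustration, not part of the original answer), here are the four moments of a fair-coin sample, computed from scratch with NumPy; note the excess kurtosis lands exactly on -2, its minimum possible value:

```python
import numpy as np

def standardized_moment(x, n):
    """n-th standardized moment: E[(X - mu)^n] / sigma^n."""
    mu = x.mean()
    sigma = x.std()  # population standard deviation
    return ((x - mu) ** n).mean() / sigma ** n

# a fair-coin (Bernoulli 1/2) sample: two 0s and two 1s
x = np.array([0.0, 0.0, 1.0, 1.0])
mean = x.mean()                              # 1st raw moment: 0.5
var = ((x - mean) ** 2).mean()               # 2nd central moment: 0.25
skew = standardized_moment(x, 3)             # 0.0 (symmetric)
excess_kurt = standardized_moment(x, 4) - 3  # -2.0, the minimum possible
```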
|
12,144
|
What exactly are moments? How are they derived?
|
This is a bit of an old thread, but I wish to correct a misstatement in the comment by Fg Nu, who wrote "Moments are parameterized by the natural numbers, and completely characterize a distribution".
Moments do NOT completely characterize a distribution. Specifically, knowledge of all infinite number of moments, even if they exist, does not necessarily uniquely determine the distribution.
Per my favorite probability book, Feller "An Introduction to Probability Theory and Its Applications Vol II" (see my answer at Real-life examples of common distributions ), section VII.3 example on pp. 227-228, the Lognormal is not determined by its moments, meaning that there are other distributions having all infinite number of moments the same as the Lognormal, but different distribution functions. As is widely known, the Moment Generating Function does not exist for the Lognormal, nor can it for these other distributions possessing the same moments.
As stated on p. 228, an essentially nonzero random variable $X$ is determined by its moments if they all exist and
$$\sum_{n=1}^{\infty} (\mathbb{E}[X^{2n}])^{-1/(2n)}$$
diverges. Note that this is not an if and only if. This condition does not hold for the Lognormal, and indeed it is not determined by its moments.
On the other hand, distributions (random variables) which share all infinite number of moments, can only differ by so much, due to inequalities which can be derived from their moments.
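To make the criterion concrete (a sketch of my own, using the standard moment formulas $E[Z^{2n}] = (2n-1)!!$ for the standard normal and $E[X^{2n}] = e^{2n^2}$ for the standard lognormal): the normal's terms decay like $n^{-1/2}$, so the series diverges and the normal is determined by its moments, while the lognormal's terms are $e^{-n}$, so the series converges and the criterion is silent, consistent with the lognormal not being determined by its moments.

```python
import math

# Carleman term (E[X^{2n}])^(-1/(2n)); if the series of these diverges,
# the distribution is determined by its moments.

def normal_term(n):
    # E[Z^{2n}] = (2n)! / (2^n n!) = (2n-1)!! for standard normal Z
    m2n = math.factorial(2 * n) // (2 ** n * math.factorial(n))
    return m2n ** (-1.0 / (2 * n))

def lognormal_term(n):
    # E[X^{2n}] = exp(2 n^2) for standard lognormal X, so the term is exp(-n)
    return math.exp(-n)

normal_sum = sum(normal_term(n) for n in range(1, 100))        # grows without bound
lognormal_sum = sum(lognormal_term(n) for n in range(1, 100))  # stays below 1/(e-1)
```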
|
12,145
|
What exactly are moments? How are they derived?
|
A corollary to Glen_b's remarks is that the first moment, the mean, corresponds to the center of gravity for a physical object, and the second moment around the mean, the variance, corresponds to its moment of inertia. After that, you're on your own.
|
12,146
|
What exactly are moments? How are they derived?
|
A binomial tree has two branches, each with a probability of 0.5: p = 0.5 and q = 1 - 0.5 = 0.5. This generates, in the limit, a normal distribution with an evenly distributed probability mass.
We have to assume that each tier in the tree is complete. When we break data up into bins, the division gives a real number, and we round up; that leaves a tier incomplete, so we don't end up with a histogram approximating the normal.
Change the branching probabilities to p = 0.9999 and q = 0.0001 and we get a skewed normal: the probability mass has shifted. That accounts for skewness.
Having incomplete tiers, or fewer than 2^n bins, generates binomial trees with regions that carry no probability mass. This gives us kurtosis.
Response to comment:
When I was talking about determining the number of bins, round up to the next integer.
Quincunx machines drop balls that come to eventually approximate the normal distribution via the binomial. Several assumptions are made by such a machine: 1) the number of bins is finite, 2) the underlying tree is binary, and 3) the probabilities are fixed. The Quincunx machine at the Museum of Mathematics in New York lets the user dynamically change the probabilities. The probabilities can change at any time, even before the current layer is finished. Hence this idea about the bins not being filled.
Unlike what I said in my original answer: when you have a void in the tree, the distribution demonstrates kurtosis.
I'm looking at this from the perspective of generative systems. I use a triangle to summarize decision trees. When a novel decision is made, more bins are added at the base of the triangle, and in terms of the distribution, in the tails. Trimming subtrees from the tree would leave voids in the distribution's probability mass.
I only replied to give you an intuitive sense. Labels? I've used Excel and played with the probabilities in the binomial and generated the expected skews. I have not done so with kurtosis, it doesn't help that we are forced to think about probability mass as being static while using language suggesting movement. The underlying data or balls cause the kurtosis. Then, we analyze it variously and attribute it to shape descriptive terms like center, shoulder, and tail. The only things we have to work with are the bins. Bins live dynamic lives even if the data can't.
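The effect of unbalancing the branch probabilities can be quantified with the standard binomial skewness formula (my addition, not part of the original answer): the skewness of Binomial(n, p) is (1 - 2p) / sqrt(n p (1 - p)), which is zero for the balanced tree and large in magnitude for lopsided ones.

```python
import math

def binomial_skewness(n, p):
    # Pearson moment skewness of Binomial(n, p): (1 - 2p) / sqrt(n p (1 - p))
    return (1 - 2 * p) / math.sqrt(n * p * (1 - p))

symmetric = binomial_skewness(100, 0.5)    # 0.0: balanced tree, no skew
lopsided = binomial_skewness(100, 0.9999)  # large negative: mass piled at the top
```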
|
12,147
|
What exactly are moments? How are they derived?
|
How can I build intuition for what moments really are?
In the context of computer vision, recognizing two-dimensional shapes such as letters or geometric objects from pixel data, a classic article exploiting moments is:
Ming-Kuei Hu, "Visual pattern recognition by moment invariants," in IRE Transactions on Information Theory, vol. 8, no. 2, pp. 179-187, February 1962.
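Hu's invariants are built from raw and central image moments $m_{pq} = \sum_x \sum_y x^p y^q I(x,y)$, the two-dimensional analog of $E[X^p Y^q]$. A minimal sketch of my own (the invariants themselves are omitted) computes the mass and centroid of a small binary image:

```python
import numpy as np

def raw_moment(img, p, q):
    # m_pq = sum_x sum_y x^p y^q I(x, y)
    y, x = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    return float((img * x ** p * y ** q).sum())

# a 4x4 image with a single bright 2x2 square in the lower-right corner
img = np.zeros((4, 4))
img[2:4, 2:4] = 1.0

m00 = raw_moment(img, 0, 0)       # total mass: 4.0
cx = raw_moment(img, 1, 0) / m00  # centroid x: 2.5
cy = raw_moment(img, 0, 1) / m00  # centroid y: 2.5
```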
|
12,148
|
Uniform random variable as sum of two random variables
|
The result can be proven with a picture: the visible gray areas show that a uniform distribution cannot be decomposed as a sum of two independent identically distributed variables.
Notation
Let $X$ and $Y$ be iid such that $X+Y$ has a uniform distribution on $[0,1]$. This means that for all $0\le a \le b \le 1$,
$$\Pr(a < X+Y \le b) = b-a.$$
The essential support of the common distribution of $X$ and $Y$ therefore is $[0,1/2]$ (for otherwise there would be positive probability that $X+Y$ lies outside $[0,1]$).
The Picture
Let $0 \lt \epsilon \lt 1/4$. Contemplate this diagram showing how sums of random variables are computed:
The underlying probability distribution is the joint one for $(X,Y)$. The probability of any event $a \lt X+Y \le b$ is given by the total probability covered by the diagonal band stretching between the lines $x+y=a$ and $x+y=b$. Three such bands are shown: from $0$ to $\epsilon$, appearing as a small blue triangle in the lower left; from $1/2-\epsilon$ to $1/2+\epsilon$, shown as a gray rectangle capped with two (yellow and green) triangles; and from $1-\epsilon$ to $1$, appearing as a small red triangle in the upper right.
What the Picture Shows
By comparing the lower left triangle in the figure to the lower left square containing it and exploiting the iid assumption for $X$ and $Y$, it is clear that
$$\epsilon = \Pr(X+Y \le \epsilon) \lt \Pr(X \le \epsilon)\Pr(Y \le \epsilon) = \Pr(X \le \epsilon)^2.$$
Note that the inequality is strict: equality is not possible because there is some positive probability that both $X$ and $Y$ are less than $\epsilon$ but nevertheless $X+Y \gt \epsilon$.
Similarly, comparing the red triangle to the square in the upper right corner,
$$\epsilon = \Pr(X+Y \gt 1-\epsilon) \lt \Pr(X \gt 1/2-\epsilon)^2.$$
Finally, comparing the two opposite triangles in the upper left and lower right to the diagonal band containing them gives another strict inequality,
$$2\epsilon \lt 2 \Pr(X\le \epsilon)\Pr(X \gt 1/2-\epsilon) \lt \Pr(1/2-\epsilon \lt X+Y \le 1/2+\epsilon) = 2\epsilon.$$
The first inequality ensues from the previous two (take their square roots and multiply them) while the second one describes the (strict) inclusion of the triangles within the band and the last equality expresses the uniformity of $X+Y$. The conclusion that $2\epsilon \lt 2\epsilon$ is the contradiction proving such $X$ and $Y$ cannot exist, QED.
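A quick sanity check of my own on the natural candidate $X, Y \sim \text{Uniform}(0, 1/2)$: their sum is triangular rather than uniform, showing exactly the deficit of mass near 0 that the small blue triangle exposes.

```python
# For X, Y iid Uniform(0, 1/2), the sum has triangular density f(s) = 4s on [0, 1/2],
# so P(X + Y <= t) = 2 t^2 there, not the value t a uniform sum would require.
def p_sum_le(t):
    assert 0 <= t <= 0.5
    return 2 * t * t

uniform_target = 0.25        # Pr(U <= 1/4) for U uniform on [0, 1]
candidate = p_sum_le(0.25)   # 0.125: only half as much mass near 0
```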
|
12,149
|
Uniform random variable as sum of two random variables
|
I tried finding a proof without considering characteristic functions. Excess kurtosis does the trick. Here's the two-line answer: $\text{Kurt}(U) = \text{Kurt}(X + Y) = \text{Kurt}(X) / 2$ since $X$ and $Y$ are iid. Then $\text{Kurt}(U) = -1.2$ implies $\text{Kurt}(X) = -2.4$ which is a contradiction as $\text{Kurt}(X) \geq -2$ for any random variable.
Rather more interesting is the line of reasoning that got me to that point. $X$ (and $Y$) must be bounded between 0 and 0.5 - that much is obvious, but helpfully means that its moments and central moments exist. Let's start by considering the mean and variance: $\mathbb{E}(U)=0.5$ and $\text{Var}(U)=\frac{1}{12}$. If $X$ and $Y$ are identically distributed then we have:
$$\mathbb{E}(X + Y) = \mathbb{E}(X) + \mathbb{E}(Y) = 2 \mathbb{E}(X)= 0.5$$
So $\mathbb{E}(X) = 0.25$. For the variance we additionally need to use independence to apply:
$$\text{Var}(X+Y) = \text{Var}(X) + \text{Var}(Y) = 2 \text{Var}(X) = \frac{1}{12}$$
Hence $\text{Var}(X) = \frac{1}{24}$ and $\sigma_X = \frac{1}{2\sqrt{6}} \approx 0.204$. Wow! That is a lot of variation for a random variable whose support ranges from 0 to 0.5. But we should have expected that, since the standard deviation isn't going to scale in the same way that the mean did.
Now, what's the largest standard deviation that a random variable can have if the smallest value it can take is 0, the largest value it can take is 0.5, and the mean is 0.25? Collecting all the probability at two point masses on the extremes, 0.25 away from the mean, would clearly give a standard deviation of 0.25. So our $\sigma_X$ is large but not impossible. (I hoped to show that this implied too much probability lay in the tails for $X + Y$ to be uniform, but I couldn't get anywhere with that on the back of an envelope.)
Second moment considerations almost put an impossible constraint on $X$ so let's consider higher moments. What about Pearson's moment coefficient of skewness, $\gamma_1 = \frac{\mathbb{E}(X - \mu_X)^3}{\sigma_X^3} = \frac{\kappa_3}{\kappa_2^{3/2}}$? This exists since the central moments exist and $\sigma_X \neq 0$. It is helpful to know some properties of the cumulants, in particular applying independence and then identical distribution gives:
$$\kappa_i(U) = \kappa_i(X + Y) = \kappa_i(X) + \kappa_i(Y) = 2\kappa_i(X)$$
This additivity property is precisely the generalisation of how we dealt with the mean and variance above - indeed, the first and second cumulants are just $\kappa_1 = \mu$ and $\kappa_2 = \sigma^2$.
Then $\kappa_3(U) = 2\kappa_3(X)$ and $\big(\kappa_2(U)\big)^{3/2} = \big(2\kappa_2(X)\big)^{3/2} = 2^{3/2} \big(\kappa_2(X)\big)^{3/2}$. The fraction for $\gamma_1$ cancels to yield $\text{Skew}(U) = \text{Skew}(X + Y) = \text{Skew}(X) / \sqrt{2}$. Since the uniform distribution has zero skewness, so does $X$, but I can't see how a contradiction arises from this restriction.
So instead, let's try the excess kurtosis, $\gamma_2 = \frac{\kappa_4}{\kappa_2^2} = \frac{\mathbb{E}(X - \mu_X)^4}{\sigma_X^4} - 3$. By a similar argument (this question is self-study, so try it!), we can show this exists and obeys:
$$\text{Kurt}(U) = \text{Kurt}(X + Y) = \text{Kurt}(X) / 2$$
The uniform distribution has excess kurtosis $-1.2$ so we require $X$ to have excess kurtosis $-2.4$. But the smallest possible excess kurtosis is $-2$, which is achieved by the $\text{Binomial}(1, \frac{1}{2})$ Bernoulli distribution.
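The exact values are easy to verify with rational arithmetic (a sketch of my own, using the closed-form central moments of the uniform distribution: fourth central moment 1/80, variance 1/12):

```python
from fractions import Fraction

# exact moments of U ~ Uniform(0, 1)
m4 = Fraction(1, 80)   # 4th central moment: integral of (u - 1/2)^4 du over [0, 1]
var = Fraction(1, 12)  # variance

excess_kurt_U = m4 / var ** 2 - 3   # = -6/5 = -1.2
required_for_X = 2 * excess_kurt_U  # halving rule run backwards: -12/5 = -2.4 < -2
```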
|
Uniform random variable as sum of two random variables
|
I tried finding a proof without considering characteristic functions. Excess kurtosis does the trick. Here's the two-line answer: $\text{Kurt}(U) = \text{Kurt}(X + Y) = \text{Kurt}(X) / 2$ since $X$ a
|
Uniform random variable as sum of two random variables
I tried finding a proof without considering characteristic functions. Excess kurtosis does the trick. Here's the two-line answer: $\text{Kurt}(U) = \text{Kurt}(X + Y) = \text{Kurt}(X) / 2$ since $X$ and $Y$ are iid. Then $\text{Kurt}(U) = -1.2$ implies $\text{Kurt}(X) = -2.4$ which is a contradiction as $\text{Kurt}(X) \geq -2$ for any random variable.
Rather more interesting is the line of reasoning that got me to that point. $X$ (and $Y$) must be bounded between 0 and 0.5 - that much is obvious, but helpfully means that its moments and central moments exist. Let's start by considering the mean and variance: $\mathbb{E}(U)=0.5$ and $\text{Var}(U)=\frac{1}{12}$. If $X$ and $Y$ are identically distributed then we have:
$$\mathbb{E}(X + Y) = \mathbb{E}(X) + \mathbb{E}(Y) = 2 \mathbb{E}(X)= 0.5$$
So $\mathbb{E}(X) = 0.25$. For the variance we additionally need to use independence to apply:
$$\text{Var}(X+Y) = \text{Var}(X) + \text{Var}(Y) = 2 \text{Var}(X) = \frac{1}{12}$$
Hence $\text{Var}(X) = \frac{1}{24}$ and $\sigma_X = \frac{1}{2\sqrt{6}} \approx 0.204$. Wow! That is a lot of variation for a random variable whose support ranges from 0 to 0.5. But we should have expected that, since the standard deviation isn't going to scale in the same way that the mean did.
Now, what's the largest standard deviation that a random variable can have if the smallest value it can take is 0, the largest value it can take is 0.5, and the mean is 0.25? Collecting all the probability at two point masses on the extremes, 0.25 away from the mean, would clearly give a standard deviation of 0.25. So our $\sigma_X$ is large but not impossible. (I hoped to show that this implied too much probability lay in the tails for $X + Y$ to be uniform, but I couldn't get anywhere with that on the back of an envelope.)
Second moment considerations almost put an impossible constraint on $X$ so let's consider higher moments. What about Pearson's moment coefficient of skewness, $\gamma_1 = \frac{\mathbb{E}(X - \mu_X)^3}{\sigma_X^3} = \frac{\kappa_3}{\kappa_2^{3/2}}$? This exists since the central moments exist and $\sigma_X \neq 0$. It is helpful to know some properties of the cumulants, in particular applying independence and then identical distribution gives:
$$\kappa_i(U) = \kappa_i(X + Y) = \kappa_i(X) + \kappa_i(Y) = 2\kappa_i(X)$$
This additivity property is precisely the generalisation of how we dealt with the mean and variance above - indeed, the first and second cumulants are just $\kappa_1 = \mu$ and $\kappa_2 = \sigma^2$.
Then $\kappa_3(U) = 2\kappa_3(X)$ and $\big(\kappa_2(U)\big)^{3/2} = \big(2\kappa_2(X)\big)^{3/2} = 2^{3/2} \big(\kappa_2(X)\big)^{3/2}$. The fraction for $\gamma_1$ cancels to yield $\text{Skew}(U) = \text{Skew}(X + Y) = \text{Skew}(X) / \sqrt{2}$. Since the uniform distribution has zero skewness, so does $X$, but I can't see how a contradiction arises from this restriction.
So instead, let's try the excess kurtosis, $\gamma_2 = \frac{\kappa_4}{\kappa_2^2} = \frac{\mathbb{E}(X - \mu_X)^4}{\sigma_X^4} - 3$. By a similar argument (this question is self-study, so try it!), we can show this exists and obeys:
$$\text{Kurt}(U) = \text{Kurt}(X + Y) = \text{Kurt}(X) / 2$$
The uniform distribution has excess kurtosis $-1.2$, so we require $X$ to have excess kurtosis $-2.4$. But the smallest possible excess kurtosis of any distribution is $-2$, achieved by the Bernoulli $\text{Binomial}(1, \frac{1}{2})$ distribution, so no such $X$ can exist. Contradiction.
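Both kurtosis values used above are easy to verify numerically. Here's a quick sketch in Python (my own illustration, not part of the original argument), computing excess kurtosis directly from raw moments:

```python
import numpy as np

def excess_kurtosis(x, p):
    """Excess kurtosis of a distribution with support points x and weights p."""
    mu = np.sum(x * p)
    m2 = np.sum((x - mu) ** 2 * p)
    m4 = np.sum((x - mu) ** 4 * p)
    return m4 / m2 ** 2 - 3

# Bernoulli(1/2): attains the minimum possible excess kurtosis, -2
bern = excess_kurtosis(np.array([0.0, 1.0]), np.array([0.5, 0.5]))

# Uniform(0, 1), approximated by a fine midpoint grid: excess kurtosis -1.2
n = 100_000
u = (np.arange(n) + 0.5) / n
unif = excess_kurtosis(u, np.full(n, 1.0 / n))
```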
|
12,150
|
Uniform random variable as sum of two random variables
|
Assume $U = X + Y$ for two i.i.d random variables. First note that since $U$ has support $[0, 1]$, $X$ must be a bounded random variable (bounded by $1/2$), as a result of
\begin{align}
P[X > 1/2]^2 = P[X > 1/2, Y > 1/2] \leq P[X + Y > 1] = P[U > 1] = 0.
\end{align}
This shows that $X$ has moment of order $n$ for all $n \in \mathbb{N}$. We thus can expand the characteristic function $\varphi(t)$ of $X$ as follows (see page 344, (26.7) of Billingsley's book):
\begin{align}
\varphi(t) = \sum_{k = 0}^\infty \frac{(it)^k}{k!}E(X^k), \text{ for all } t\in \mathbb{R}.
\end{align}
This means that $\varphi(t)$ is differentiable everywhere on $\mathbb{R}$. Furthermore, by assumption and the characteristic function of a U(0,1) random variable, we have:
\begin{align}
\varphi(t)^2 = \frac{e^{it} - 1}{it}. \tag{1}
\end{align}
Therefore,
\begin{align}
2\varphi(t)\varphi'(t) = \frac{-te^{it} - ie^{it} + i}{-t^2}. \tag{2}
\end{align}
By $(1)$, $\varphi^2(2\pi) = 0$, whence $\varphi(2\pi) = 0$. Substituting $t = 2\pi$ into $(2)$ (note $e^{2\pi i} = 1$),
\begin{align}
0 = 2\varphi(2\pi)\varphi'(2\pi) = \frac{-2\pi}{-4\pi^2} = \frac{1}{2\pi} \neq 0.
\end{align}
Contradiction!
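The contradiction is easy to check numerically: $\varphi(t)^2$ vanishes at $t = 2\pi$ while its derivative there does not. A small sketch (my own, using only formula $(1)$ above):

```python
import cmath
import math

def phi_sq(t):
    """phi(t)^2 for U(0,1): (e^{it} - 1) / (it)."""
    return (cmath.exp(1j * t) - 1) / (1j * t)

t0 = 2 * math.pi
h = 1e-6

value = phi_sq(t0)  # ~0, forcing phi(2*pi) = 0
# Central difference for d/dt phi(t)^2 = 2 phi(t) phi'(t); nonzero, = 1/(2*pi)
deriv = (phi_sq(t0 + h) - phi_sq(t0 - h)) / (2 * h)
```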
|
12,151
|
Can I use Kolmogorov-Smirnov to compare two empirical distributions?
|
That is OK, and quite reasonable. It is referred to as the two-sample Kolmogorov-Smirnov test. Measuring the difference between two distribution functions by the supnorm is always sensible, but to do a formal test you want to know the distribution under the hypothesis that the two samples are independent and each i.i.d. from the same underlying distribution. To rely on the usual asymptotic theory you will need continuity of the underlying common distribution (not of the empirical distributions). See the Wikipedia page linked to above for more details.
In R, you can use ks.test, which computes exact $p$-values for small sample sizes.
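If you happen to be working in Python instead, SciPy exposes the same two-sample test as scipy.stats.ks_2samp (a sketch; the data here are made up):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
x = rng.normal(size=200)   # first sample
y = rng.normal(size=150)   # second sample (same underlying distribution)

# Two-sample KS test: supnorm distance between the two empirical CDFs
res = stats.ks_2samp(x, y)
print(res.statistic, res.pvalue)
```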
|
12,152
|
Stepwise regression in R - How does it work?
|
Perhaps it would be easier to understand how stepwise regression is being done by looking at all 15 possible lm models.
Here's a quickie to generate formulas for all 15 combinations.
library(leaps)
tmp<-regsubsets(mpg ~ wt + drat + disp + qsec, data=mtcars, nbest=1000, really.big=T, intercept=F)
all.mods <- summary(tmp)[[1]]
all.mods <- lapply(1:nrow(all.mods), function(x) as.formula(paste("mpg~", paste(names(which(all.mods[x,])), collapse="+"))))
head(all.mods)
[[1]]
mpg ~ drat
<environment: 0x0000000013a678d8>
[[2]]
mpg ~ qsec
<environment: 0x0000000013a6b3b0>
[[3]]
mpg ~ wt
<environment: 0x0000000013a6df28>
[[4]]
mpg ~ disp
<environment: 0x0000000013a70aa0>
[[5]]
mpg ~ wt + qsec
<environment: 0x0000000013a74540>
[[6]]
mpg ~ drat + disp
<environment: 0x0000000013a76f68>
AIC values for each of the models are extracted with:
all.lm<-lapply(all.mods, lm, mtcars)
sapply(all.lm, extractAIC)[2,]
[1] 97.98786 111.77605 73.21736 77.39732 63.90843 77.92493 74.15591 79.02978 91.24052 71.35572
[11] 63.89108 65.90826 78.68074 72.97352 65.62733
Let's go back to your step-regression. The extractAIC value for lm(mpg ~ wt + drat + disp + qsec) is 65.63 (equivalent to model 15 in the list above).
If the model removes disp (-disp), then lm(mpg ~ wt + drat + qsec) has AIC 63.891 (model 11 in the list).
If the model removes nothing (<none>), the AIC stays at 65.63.
If the model removes qsec (-qsec), then lm(mpg ~ wt + drat + disp) has AIC 65.908 (model 12).
etc.
Basically, the summary lists every possible removal of one term from your full model, compares the extractAIC values, and sorts them in ascending order. Since a smaller AIC value is more likely to resemble the TRUE model, step retains the (-disp) model in step one.
The process is repeated, but with the retained (-disp) model as the starting point. Terms are either subtracted ("backward") or subtracted/added ("both") to allow the comparison of models. Since the lowest AIC in the comparison still belongs to the (-disp) model, the process stops and the resulting model is returned.
With regards to your query "What is the function trying to achieve by adding the +disp again in the stepwise selection?": in this case, it doesn't really do anything, because the best model across all 15 models is model 11, i.e. lm(mpg ~ wt + drat + qsec).
However, in complicated models with a large number of predictors that require numerous steps to resolve, adding back a term that was removed earlier is critical to making the comparison of terms as exhaustive as possible.
Hope this helps in some way.
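The single-step comparison that step performs can also be sketched outside R. Here's a minimal Python illustration (names and synthetic data are my own) using the same AIC scale that extractAIC reports, $n\log(\text{RSS}/n) + 2k$:

```python
import numpy as np

def extract_aic(y, X):
    """AIC on the scale R's extractAIC uses: n * log(RSS / n) + 2 * k."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    n, k = X.shape
    return n * np.log(resid @ resid / n) + 2 * k

rng = np.random.default_rng(1)
n = 200
X = rng.normal(size=(n, 4))                          # four candidate predictors
y = 5 * X[:, 0] + 2 * X[:, 1] + rng.normal(size=n)   # only the first two matter

full = extract_aic(y, X)
# One backward step: drop each column in turn and compare AICs
drops = {j: extract_aic(y, np.delete(X, j, axis=1)) for j in range(4)}
```

Dropping either of the two informative columns raises the AIC sharply, while dropping a noise column changes it by roughly the penalty term, which is exactly the trade-off the step output tabulates.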
|
12,153
|
Stepwise regression in R - How does it work?
|
Here is a simplified response. Both procedures try to reduce the AIC of a given model, but they do it in different ways. The basic difference is that in the backward selection procedure you can only discard variables from the model at any step, whereas in stepwise selection you can also add variables back to the model.
About the output in stepwise selection: in general, the output shows you ordered alternatives for reducing your AIC, so the first row at any step is your best option. There is a +disp in the third row because adding that variable to your model would be your third-best option for decreasing your AIC. But since your best alternative is <none>, that is, doing nothing, the procedure stops and gives you the same results as backward selection.
|
12,154
|
Non-normal distributions with zero skewness and zero excess kurtosis?
|
Yes, examples with skewness and excess kurtosis both zero are relatively easy to construct. (Indeed examples (a) to (d) below also have Pearson mean-median skewness 0.)
(a) For example, in this answer an example is given by taking a 50-50 mixture of a gamma variate (which I call $X$) and the negative of a second one, whose density looks like this:
Clearly the result is symmetric and not normal. The scale parameter is unimportant here, so we can make it 1. Careful choice of the shape parameter of the gamma yields the required kurtosis:
The variance of this double-gamma ($Y$) is easy to work out in terms of the gamma variate it's based on: $\text{Var}(Y)=E(X^2)=\text{Var}(X)+E(X)^2=\alpha+\alpha^2$.
The fourth central moment of the variable $Y$ is the same as $E(X^4)$, which for a gamma($\alpha$) is $\alpha(\alpha+1)(\alpha+2)(\alpha+3)$
As a result the kurtosis is $\frac{\alpha(\alpha+1)(\alpha+2)(\alpha+3)}{\alpha^2(\alpha+1)^2}=\frac{(\alpha+2)(\alpha+3)}{\alpha(\alpha+1)}$. This is $3$ when $(\alpha+2)(\alpha+3)=3\alpha(\alpha+1)$, which happens when $\alpha=(\sqrt{13}+1)/2\approx 2.303$.
(b) We could also create an example as a scale mixture of two uniforms. Let $U_1\sim U(-1,1)$ and $U_2\sim U(-a,a)$, and let $M$ be a 50-50 mixture of $U_1$ and $U_2$ (a mixture of the two densities, not the sum of the variables). Since $M$ is symmetric with finite range, we must have $E(M)=0$; the skewness will also be 0, and central moments and raw moments will be the same.
$\text{Var}(M)=E(M^2)=\frac12\text{Var}(U_1)+\frac12\text{Var}(U_2)=\frac16[1+a^2]$.
Similarly, $E(M^4)=\frac{1}{10} (1+a^4)$ and so
the kurtosis is $\frac{\frac{1}{10} (1+a^4)}{[\frac16 (1+a^2)]^2}=3.6\frac{1+a^4}{(1+a^2)^2}$
If we choose $a=\sqrt{5+\sqrt{24}}\approx 3.1463$, then kurtosis is 3, and the density looks like this:
(c) here's a fun example. Let $X_i\stackrel{_\text{iid}}{\sim}\text{Pois}(\lambda)$, for $i=1,2$.
Let $Y$ be a 50-50 mixture of $\sqrt{X_1}$ and $-\sqrt{X_2}$:
by symmetry $E(Y)=0$ (we also need $E(|Y|)$ to be finite but given $E(X_1)$ is finite, we have that)
$Var(Y)=E(Y^2)=E(X_1)=\lambda$
by symmetry (and the fact that the absolute 3rd moment exists) skew=0
4th moment: $E(Y^4) = E(X_1^2) = \lambda+\lambda^2$
kurtosis = $\frac{\lambda+\lambda^2}{\lambda^2}= 1+1/\lambda$
so when $\lambda=\frac12$, kurtosis is 3. This is the case illustrated above.
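The parameter choices in (a)-(c) are easy to sanity-check with the closed-form kurtosis expressions derived above (a sketch; nothing here beyond those formulas):

```python
import math

# (a) double-gamma: (alpha+2)(alpha+3) / (alpha(alpha+1)) should equal 3
alpha = (math.sqrt(13) + 1) / 2
kurt_a = (alpha + 2) * (alpha + 3) / (alpha * (alpha + 1))

# (b) mixture of uniforms: 3.6 (1 + a^4) / (1 + a^2)^2 should equal 3
a = math.sqrt(5 + math.sqrt(24))
kurt_b = 3.6 * (1 + a ** 4) / (1 + a ** 2) ** 2

# (c) Poisson-based mixture: 1 + 1/lambda should equal 3 at lambda = 1/2
kurt_c = 1 + 1 / 0.5
```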
(d) all my examples so far have been symmetric, since symmetric answers are easier to create -- but asymmetric solutions are also possible. Here's a discrete example.
(e) Now, here's an asymmetric continuous family. It will perhaps be the most surprising for some readers, so I'll describe it in detail. I'll begin by describing a discrete example and then build a continuous example from it (indeed I could have started with the one in (d), and it would have been simpler to play with, but I didn't, so we also have another discrete example for free).
$\:\,$ (i) At $x=-2,1$ and $m=\frac12 (5+\sqrt{33})$ ($\approx 5.3723$) place probabilities of $p_{-2}= \frac{1}{36}(7+\sqrt{33})$, $p_1=\frac{1}{36}(17+\sqrt{33})$, and $p_m=\frac{1}{36}(12-2\sqrt{33})$ (approximately 35.402%, 63.179% and 1.419%), respectively. This asymmetric three-point discrete distribution has zero skewness and zero excess kurtosis (as with all the above examples, it also has mean zero, which simplifies the calculations).
$\:$ (ii) Now, let's make a continuous mixture. Centered at each of the ordinates above (-2,1,m), place a Gaussian kernel with common standard deviation $\sigma$, and probability-weight given by the probabilities above (i.e. $w=(p_{-2},p_1,p_m)$). Phrased another way, take a mixture of three Gaussians with means at $-2,1$ and $m$ each with standard deviation $\sigma$ in the proportions $(p_{-2},p_1,p_m)$ respectively. For any choice of $\sigma$ the resulting continuous distribution has skewness 0 and excess kurtosis 0.
Here's one example (here the common $\sigma$ for the normal components is 1.25):
(The marks below the density show the locations of the centers of the Gaussian components.)
As you see, none of these examples look particularly "normal". It would be a simple matter to make any number of discrete, continuous or mixed variables with the same properties. While most of my examples were constructed as mixtures, there's nothing special about mixtures, other than they're often a convenient way to make distributions with properties the way you want, a bit like building things with Lego.
This answer gives some additional details on kurtosis that should make some of the considerations involved in constructing other examples a little clearer.
You could match more moments in similar fashion, though it requires more effort to do so. However, because the MGF of the normal exists, you can't match all integer moments of a normal with some non-normal distribution, since that would mean their MGFs match, implying the second distribution was normal as well.
|
12,155
|
Non-normal distributions with zero skewness and zero excess kurtosis?
|
Good points are made by Glen_b. I would only add consideration of the Dirac delta function as additional grist for the mill. As Wikipedia notes, "The DDF is a generalized function, or distribution, on the real number line that is zero everywhere except at zero, with an integral of one over the entire real line", with the consequence that all higher moments of the DDF are zero.
Paul Dirac applied it to quantum mechanics in his 1930 book The Principles of Quantum Mechanics, but its origins date back to Fourier, Lebesgue, Cauchy and others. The DDF also has physical analogues in modeling distributions, e.g., of the crack of a bat hitting a baseball.
|
12,156
|
Why is chi square used when creating a confidence interval for the variance?
|
Quick answer
The reason is because, assuming the data are i.i.d. and $X_i\sim N(\mu,\sigma^2)$, and defining
\begin{eqnarray*}
\bar{X}&=&\sum^N \frac{X_i}{N}\\
S^2 &=& \sum^{N} \frac{(\bar{X}-X_i)^2}{N-1}
\end{eqnarray*}
when forming confidence intervals, the sampling distribution associated with the sample variance ($S^2$; remember, a random variable!) is a chi-square distribution ($S^2(N-1)/\sigma^2 \sim \chi^2_{N-1}$), just as the sampling distribution associated with the sample mean is a standard normal distribution ($(\bar{X}-\mu)\sqrt{N}/\sigma \sim N(0,1)$) when you know the variance, and a Student's $t$ when you don't ($(\bar{X}-\mu)\sqrt{N}/S \sim t_{N-1}$).
Long answer
First of all, we'll prove that $S^2(N-1)/\sigma^2$ follows a chi-square distribution with $N-1$ degrees of freedom. After that, we'll see how this proof is useful when deriving the confidence intervals for the variance, and how the chi-square distribution appears (and why it is so useful!). Let's begin.
The proof
For this, maybe you must get used to the chi-square distribution in this Wikipedia article. This distribution has only one parameter: the degrees of freedom, $\nu$, and happens to have a Moment Generating Function (MGF) given by:
\begin{equation*}
m_{\chi^2_\nu}(t)=(1-2t)^{-\nu/2}.
\end{equation*}
If we can show that the distribution of $S^2(N-1)/\sigma^2$ has a moment generating function like this one, but with $\nu=N-1$, then we have shown that $S^2(N-1)/\sigma^2$ follows a chi-square distribution with $N-1$ degrees of freedom. In order to show this, note two facts:
If we define,
\begin{equation*}
Y = \sum \frac{(X_i-\mu)^2}{\sigma^2} = \sum Z_i^2,
\end{equation*}
where $Z_i\sim N(0,1)$, i.e., standard normal random variables, the moment generating function of $Y$ is given by
\begin{eqnarray*}
m_Y(t) &=& \mathbb{E}[e^{tY}]\\
&=&\mathbb{E}[e^{tZ_1^2}]\times \mathbb{E}[e^{tZ_2^2}]\times ...\mathbb{E}[e^{tZ_N^2}]\\
&=&m_{Z_1^2}(t)\times m_{Z_2^2}(t)\times ...m_{Z_N^2}(t).
\end{eqnarray*}
The MGF of $Z^2$ is given by
\begin{eqnarray*}
m_{Z^2}(t) &=& \int_{-\infty}^{\infty} f(z)\exp(tz^2)dz\\
&=&(1-2t)^{-1/2}, \qquad t < \tfrac{1}{2},
\end{eqnarray*}
where I have used the PDF of the standard normal, $f(z)=e^{-z^2/2}/\sqrt{2\pi}$ and, hence,
\begin{equation*}
m_Y(t)=(1-2t)^{-N/2},
\end{equation*}
which implies that $Y$ follows a chi-square distribution with $N$ degrees of freedom.
If $Y_1$ and $Y_2$ are independent and each distribute as a chi-square distribution but with $\nu_1$ and $\nu_2$ degrees of freedom, then $W=Y_1+Y_2$ distributes with a chi-square distribution with $\nu_1+\nu_2$ degrees of freedom (this follows from taking the MGF of $W$; do this!).
With the above facts, note that if you multiply the sample variance by $N-1$, you obtain (after some algebra),
\begin{equation*}
(N-1)S^2 = -N(\bar{X}-\mu)^2+\sum(X_i-\mu)^2,
\end{equation*}
and, hence, dividing by $\sigma^2$,
\begin{equation*}
\frac{(N-1)S^2}{\sigma^2}+\frac{(\bar{X}-\mu)^2}{\sigma^2/N}=\sum \frac{(X_i-\mu)^2}{\sigma^2}.
\end{equation*}
Note that the second term on the left side of this sum is distributed as a chi-square with 1 degree of freedom, and the right-hand side is distributed as a chi-square with $N$ degrees of freedom. Therefore, using the independence of $\bar{X}$ and $S^2$ for normal samples (so the two terms on the left are independent) together with an MGF argument like the one in fact 2, $S^2(N-1)/\sigma^2$ is distributed as a chi-square with $N-1$ degrees of freedom.
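The distributional claim is easy to check by simulation (a sketch with made-up parameters; recall a $\chi^2_{N-1}$ variable has mean $N-1$ and variance $2(N-1)$):

```python
import numpy as np

rng = np.random.default_rng(7)
N, sigma2, reps = 10, 4.0, 200_000

# reps independent normal samples of size N
x = rng.normal(0.0, np.sqrt(sigma2), size=(reps, N))

# The statistic (N-1) S^2 / sigma^2 for each sample
stat = x.var(axis=1, ddof=1) * (N - 1) / sigma2

print(stat.mean(), stat.var())  # should be close to N-1 = 9 and 2(N-1) = 18
```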
Calculating the Confidence Interval for the variance.
When looking for a confidence interval for the variance, you want to know the limits $L_1$ and $L_2$ in
\begin{equation*}
\mathbb{P}\left(L_1\leq \sigma^2 \leq L_2\right) = 1-\alpha.
\end{equation*}
Let's play with the inequality inside the parenthesis. First, divide by $S^2(N-1)$,
\begin{equation*}
\frac{L_1}{S^2(N-1)}\leq \frac{\sigma^2}{S^2(N-1)} \leq \frac{L_2}{S^2(N-1)}.
\end{equation*}
And then remember two things: (1) the statistic $S^2(N-1)/\sigma^2$ has a chi-squared distribution with $N-1$ degrees of freedom and (2) the variance is always greater than zero, which implies that you can take reciprocals and reverse the inequalities, because\begin{eqnarray*}
\frac{L_1}{S^2(N-1)}\leq \frac{\sigma^2}{S^2(N-1)} &\Rightarrow&
\frac{S^2(N-1)}{\sigma^2}\leq \frac{S^2(N-1)}{L_1},\\
\frac{\sigma^2}{S^2(N-1)} \leq \frac{L_2}{S^2(N-1)} &\Rightarrow&
\frac{S^2(N-1)}{L_2} \leq \frac{S^2(N-1)}{\sigma^2},\\
\end{eqnarray*}
hence, the probability we are looking for is:
\begin{equation*}
\mathbb{P}\left(\frac{S^2(N-1)}{L_2} \leq \frac{S^2(N-1)}{\sigma^2}\leq \frac{S^2(N-1)}{L_1}\right) = 1-\alpha.
\end{equation*}
Note that $S^2(N-1)/\sigma^2 \sim \chi^2_{N-1}$. For an equal-tailed interval, we choose the limits so that each tail of the chi-square distribution carries probability $\alpha/2$:
\begin{eqnarray*}
\int_{0}^{\frac{S^2(N-1)}{L_2}}p_{\chi^2}(x)dx=\alpha/2,\\
\int_{\frac{S^2(N-1)}{L_1}}^{\infty}p_{\chi^2}(x)dx=\alpha/2.
\end{eqnarray*}
Calling $\chi^2_{\alpha/2}=\frac{S^2(N-1)}{L_2}$ and $\chi^2_{1-\alpha/2}=
\frac{S^2(N-1)}{L_1}$, where the values $\chi^2_{\alpha/2}$ and $\chi^2_{1-\alpha/2}$ can be found in chi-square tables (in computers mainly!) and solving for $L_1$ and $L_2$,
\begin{eqnarray*}
L_1 &=& \frac{S^2(N-1)}{\chi^2_{1-\alpha/2}},\\
L_2 &=& \frac{S^2(N-1)}{\chi^2_{\alpha/2}}.
\end{eqnarray*}
Hence, your confidence interval for the variance is
\begin{equation*}
C.I.=\left(\frac{S^2(N-1)}{\chi^2_{1-\alpha/2}},
\frac{S^2(N-1)}{\chi^2_{\alpha/2}}\right).
\end{equation*}
|
Why is chi square used when creating a confidence interval for the variance?
Quick answer
The reason is that, assuming the data are i.i.d. with $X_i\sim N(\mu,\sigma^2)$, and defining
\begin{eqnarray*}
\bar{X}&=&\sum_{i=1}^N \frac{X_i}{N},\\
S^2 &=& \sum_{i=1}^{N} \frac{(X_i-\bar{X})^2}{N-1},
\end{eqnarray*}
when forming confidence intervals, the sampling distribution of the sample variance $S^2$ (remember, a random variable!) is tied to the chi-square distribution through $S^2(N-1)/\sigma^2 \sim \chi^2_{N-1}$, just as the sampling distribution of the sample mean is standard normal, $(\bar{X}-\mu)\sqrt{N}/\sigma \sim N(0,1)$, when you know the variance, and Student's $t$, $(\bar{X}-\mu)\sqrt{N}/S \sim t_{N-1}$, when you don't.
Long answer
First of all, we'll prove that $S^2(N-1)/\sigma^2$ follows a chi-square distribution with $N-1$ degrees of freedom. After that, we'll see how this proof is useful when deriving the confidence intervals for the variance, and how the chi-square distribution appears (and why it is so useful!). Let's begin.
The proof
For this, it may help to get acquainted with the chi-square distribution in this Wikipedia article. This distribution has only one parameter, the degrees of freedom $\nu$, and it has a Moment Generating Function (MGF) given by:
\begin{equation*}
m_{\chi^2_\nu}(t)=(1-2t)^{-\nu/2}.
\end{equation*}
If we can show that the distribution of $S^2(N-1)/\sigma^2$ has a moment generating function like this one, but with $\nu=N-1$, then we have shown that $S^2(N-1)/\sigma^2$ follows a chi-square distribution with $N-1$ degrees of freedom. In order to show this, note two facts:
If we define,
\begin{equation*}
Y = \sum \frac{(X_i-\mu)^2}{\sigma^2} = \sum Z_i^2,
\end{equation*}
where $Z_i\sim N(0,1)$, i.e., standard normal random variables, the moment generating function of $Y$ is given by
\begin{eqnarray*}
m_Y(t) &=& \mathbb{E}[e^{tY}]\\
&=&\mathbb{E}[e^{tZ_1^2}]\times \mathbb{E}[e^{tZ_2^2}]\times \cdots \times \mathbb{E}[e^{tZ_N^2}] \quad \text{(by independence)}\\
&=&m_{Z_1^2}(t)\times m_{Z_2^2}(t)\times \cdots \times m_{Z_N^2}(t).
\end{eqnarray*}
The MGF of $Z^2$ is given by
\begin{eqnarray*}
m_{Z^2}(t) &=& \int_{-\infty}^{\infty} f(z)\exp(tz^2)dz\\
&=&(1-2t)^{-1/2},
\end{eqnarray*}
where I have used the PDF of the standard normal, $f(z)=e^{-z^2/2}/\sqrt{2\pi}$ and, hence,
\begin{equation*}
m_Y(t)=(1-2t)^{-N/2},
\end{equation*}
which implies that $Y$ follows a chi-square distribution with $N$ degrees of freedom.
If $Y_1$ and $Y_2$ are independent and each is distributed as chi-square with $\nu_1$ and $\nu_2$ degrees of freedom respectively, then $W=Y_1+Y_2$ is distributed as chi-square with $\nu_1+\nu_2$ degrees of freedom (this follows from taking the MGF of $W$; do this!).
With the above facts, note that if you multiply the sample variance by $N-1$, you obtain (after some algebra),
\begin{equation*}
(N-1)S^2 = -N(\bar{X}-\mu)^2+\sum(X_i-\mu)^2,
\end{equation*}
and, hence, dividing by $\sigma^2$,
\begin{equation*}
\frac{(N-1)S^2}{\sigma^2}+\frac{(\bar{X}-\mu)^2}{\sigma^2/N}=\sum \frac{(X_i-\mu)^2}{\sigma^2}.
\end{equation*}
Note that the second term on the left-hand side of this sum is distributed as chi-square with 1 degree of freedom, and the sum on the right-hand side is distributed as chi-square with $N$ degrees of freedom. Since $\bar{X}$ and $S^2$ are independent (a classical fact for normal samples), the MGFs factor, and therefore $S^2(N-1)/\sigma^2$ is distributed as chi-square with $N-1$ degrees of freedom.
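This distributional fact (not part of the original derivation) is easy to check numerically. A quick Monte Carlo sketch: draw many normal samples, form the pivot $(N-1)S^2/\sigma^2$, and compare its moments with those of $\chi^2_{N-1}$ (the sample size and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(42)
N, sigma2, reps = 10, 4.0, 20_000

# Many independent normal samples of size N with variance sigma^2
samples = rng.normal(loc=0.0, scale=np.sqrt(sigma2), size=(reps, N))
s2 = samples.var(axis=1, ddof=1)      # sample variances S^2
pivot = (N - 1) * s2 / sigma2         # should be ~ chi^2_{N-1}

# A chi^2_{N-1} variable has mean N-1 and variance 2(N-1)
print(pivot.mean())   # close to N - 1 = 9
print(pivot.var())    # close to 2(N - 1) = 18
```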
Calculating the Confidence Interval for the variance.
When looking for a confidence interval for the variance, you want to know the limits $L_1$ and $L_2$ in
\begin{equation*}
\mathbb{P}\left(L_1\leq \sigma^2 \leq L_2\right) = 1-\alpha.
\end{equation*}
Let's play with the inequality inside the parentheses. First, divide by $S^2(N-1)$,
\begin{equation*}
\frac{L_1}{S^2(N-1)}\leq \frac{\sigma^2}{S^2(N-1)} \leq \frac{L_2}{S^2(N-1)}.
\end{equation*}
And then remember two things: (1) the statistic $S^2(N-1)/\sigma^2$ has a chi-squared distribution with $N-1$ degrees of freedom, and (2) the variance is always greater than zero, which means you can invert all three terms and flip the inequalities, because\begin{eqnarray*}
\frac{L_1}{S^2(N-1)}\leq \frac{\sigma^2}{S^2(N-1)} &\Rightarrow&
\frac{S^2(N-1)}{\sigma^2}\leq \frac{S^2(N-1)}{L_1},\\
\frac{\sigma^2}{S^2(N-1)} \leq \frac{L_2}{S^2(N-1)} &\Rightarrow&
\frac{S^2(N-1)}{L_2} \leq \frac{S^2(N-1)}{\sigma^2},\\
\end{eqnarray*}
hence, the probability we are looking for is:
\begin{equation*}
\mathbb{P}\left(\frac{S^2(N-1)}{L_2} \leq \frac{S^2(N-1)}{\sigma^2}\leq \frac{S^2(N-1)}{L_1}\right) = 1-\alpha.
\end{equation*}
Note that $S^2(N-1)/\sigma^2 \sim \chi^2_{N-1}$. For an equal-tailed interval, we choose the limits so that each tail of the chi-square distribution carries probability $\alpha/2$:
\begin{eqnarray*}
\int_{0}^{\frac{S^2(N-1)}{L_2}}p_{\chi^2}(x)dx=\alpha/2,\\
\int_{\frac{S^2(N-1)}{L_1}}^{\infty}p_{\chi^2}(x)dx=\alpha/2.
\end{eqnarray*}
Calling $\chi^2_{\alpha/2}=\frac{S^2(N-1)}{L_2}$ and $\chi^2_{1-\alpha/2}=
\frac{S^2(N-1)}{L_1}$, where the values $\chi^2_{\alpha/2}$ and $\chi^2_{1-\alpha/2}$ can be found in chi-square tables (in computers mainly!) and solving for $L_1$ and $L_2$,
\begin{eqnarray*}
L_1 &=& \frac{S^2(N-1)}{\chi^2_{1-\alpha/2}},\\
L_2 &=& \frac{S^2(N-1)}{\chi^2_{\alpha/2}}.
\end{eqnarray*}
Hence, your confidence interval for the variance is
\begin{equation*}
C.I.=\left(\frac{S^2(N-1)}{\chi^2_{1-\alpha/2}},
\frac{S^2(N-1)}{\chi^2_{\alpha/2}}\right).
\end{equation*}
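To make the final formula concrete, here is a small sketch in Python using SciPy's chi-square quantile function (`chi2.ppf`); the function name `variance_ci` is made up for the example:

```python
import numpy as np
from scipy.stats import chi2

def variance_ci(sample, alpha=0.05):
    """Two-sided (1 - alpha) CI for the variance of a normal sample,
    from the pivot S^2 (N-1) / sigma^2 ~ chi^2_{N-1}."""
    n = len(sample)
    s2 = np.var(sample, ddof=1)                       # S^2
    lower = s2 * (n - 1) / chi2.ppf(1 - alpha / 2, df=n - 1)
    upper = s2 * (n - 1) / chi2.ppf(alpha / 2, df=n - 1)
    return lower, upper

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=2.0, size=50)           # true sigma^2 = 4
print(variance_ci(x))   # an interval that should typically cover 4
```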
|
12,157
|
Statistical intuition/data sense
|
I would first say that we shouldn't slight mathematics. It is an important tool in the development of statistical theory, and statistical methods are justified by theory. Theory also tells you what is wrong and what techniques might be better (e.g., more efficient). So I think mathematical knowledge and thinking is important (almost necessary) to be a good statistician. But it is definitely not sufficient. I think the books referenced in comments are good. Let me give some others.
Making Sense of Data: A Practical Guide to Exploratory Data Analysis and Data Mining
Making Sense of Data II: A Practical Guide to Data Visualization, Advanced Data Mining Methods, and Applications
Statistical Thinking: Improving Business Performance
The Role of Statistics in Business and Industry
A Career in Statistics: Beyond the Numbers
The books by Hahn and Snee are particularly valuable and interesting because these are famous industrial statisticians with the mathematical skills and the practical experience.
|
12,158
|
Statistical intuition/data sense
|
In the example you mention, the core issue is causal inference. A good place to start for causal inference is this triple-book-review by Andrew Gelman, and the books reviewed therein. In addition to learning about causal inference, you should learn about the value of exploratory data analysis, description, and prediction.
I've learned an incredible amount by hearing social scientists criticize each other's research in published work, blogs, seminars, and in personal conversations - there are lots of ways to learn. Follow this site, and Andrew Gelman's blog.
Of course, if you want data-sense, you need practice working with real data. There are general data-sense skills, but there is also data-sense which is specific to a problem area, or even more specifically, data-sense specific to a particular dataset.
|
12,159
|
Statistical intuition/data sense
|
A nice, free resource is the Chance News Wiki. It has many real-world examples along with discussion of good and bad points in how people interpret data and statistics. Often there are discussion questions as well (part of the motivation of the site is to give teachers of statistics real-world examples to discuss with students).
|
12,160
|
Statistical intuition/data sense
|
+1 for a great question! (And +1 to all the answerers thus far.)
I think there very much is such a thing as data sense, but I don't think there's anything mystical to it. The analogy I would use is to driving. When you are driving down the road, you just know what is going on with the other cars. For example, you know that the guy in front of you to the side is looking for the street sign where he's supposed to turn, even though he isn't using his turn-signal. You automatically identify the slow, over-cautious driver and anticipate how they'll react in different situations. You can spot the teenager who just wants to race as fast as he can go. You have a recognition-based sense of what all the cars are doing. This is exactly the same as data sense. It comes from experience, lots of experience. If you know enough of the theory, you just need to start playing with real datasets. You might be interested in exploring a site like DASL. One condition though, is that you shouldn't just get experience at loading a dataset, running a test, and getting a p-value. You will need to explore the data, probably plot it different ways, fit some models, and think about what's going on. (Notice that EDA has been a common thread here.)
One possibly non-obvious fact about this process, is that data sense can be localized to a given topical area. For example, you could get a lot of experience working with experimental data and ANOVA's, but not necessarily have a good feel for what's going on when you look at time-series data or survival data.
Let me add one more strategy that I've found enormously helpful: I think it's worth your time to learn a little (statistical) programming. You don't have to be terribly good at it (I'm known for writing "comically inefficient" code). However, once you can write some basic procedural code (say in R), you can simulate. It would be hard for me to overemphasize how much being able to conduct even very simple simulations can help. One thing you can use this for: when, in the course of your studies, you read about some property, you can explore it. For instance, if you know (abstractly) that it is difficult to empirically determine whether a logit or a probit model is better for a dataset, you can code up simple simulations of this and play with them to understand the idea more fully. This will also provide you with experience, but of a slightly different type, and will also help you develop your data sense.
|
12,161
|
Caret re-sampling methods
|
Ok, here is my try:
boot - bootstrap
boot632 -- 0.632 bootstrap
cv -- cross-validation, probably this refers to K-fold cross-validation.
LOOCV -- leave-one-out cross validation, also known as the jackknife.
LGOCV -- leave-group-out cross validation, variant of LOOCV for hierarchical data.
repeatedcv -- is probably repeated random sub-sampling validation, i.e., the division into train and test data is done repeatedly at random.
oob -- refers to out-of-bag estimation proposed by Breiman, which further is related to bootstrap aggregating. (The file in the link is not a ps file, but a ps.Z file, rename it and then try opening.)
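caret itself is an R package, but purely to illustrate what the repeated cross-validation idea amounts to, here is a sketch of repeated k-fold CV using Python's scikit-learn (an analogous tool, not caret itself):

```python
import numpy as np
from sklearn.model_selection import RepeatedKFold

X = np.arange(20).reshape(10, 2)   # 10 toy observations, 2 features

# Repeated k-fold CV: run 5-fold CV three times with fresh shuffles
rkf = RepeatedKFold(n_splits=5, n_repeats=3, random_state=0)
splits = list(rkf.split(X))

print(len(splits))                      # 5 folds x 3 repeats = 15 splits
train_idx, test_idx = splits[0]
print(len(train_idx), len(test_idx))    # 8 train / 2 test per split
```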
|
12,162
|
Caret re-sampling methods
|
The repeatedcv method is repeated 10-fold cross-validation for sure, according to Max Kuhn's presentation. The default resampling scheme is the bootstrap.
A good reference on resampling methods is Predictive Modeling with R and the caret Package (pdf), which Max presented at useR! 2013.
|
12,163
|
Can a small sample size cause type 1 error?
|
As a general principle, small sample size will not increase the Type I error rate for the simple reason that the test is arranged to control the Type I rate. (There are minor technical exceptions associated with discrete outcomes, which can cause the nominal Type I rate not to be achieved exactly especially with small sample sizes.)
There is an important principle here: if your test has acceptable size (= nominal Type I rate) and acceptable power for the effect you're looking for, then even if the sample size is small it's ok.
The danger is that if we otherwise know little about the situation--maybe these are all the data we have--then we might be concerned about "Type III" errors: that is, model mis-specification. They can be difficult to check with small sample sets.
As a practical example of the interplay of ideas, I will share a story. Long ago I was asked to recommend a sample size to confirm an environmental cleanup. This was during the pre-cleanup phase before we had any data. My plan called for analyzing the 1000 or so samples that would be obtained during cleanup (to establish that enough soil had been removed at each location) to assess the post-cleanup mean and variance of the contaminant concentration. Then (to simplify greatly), I said we would use a textbook formula--based on specified power and test size--to determine the number of independent confirmation samples that would be used to prove the cleanup was successful.
What made this memorable was that after the cleanup was done, the formula said to use only 3 samples. Suddenly my recommendation did not look very credible!
The reason for needing only 3 samples is that the cleanup was aggressive and worked well. It reduced average contaminant concentrations to about 100 give or take 100 ppm, consistently below the target of 500 ppm.
In the end this approach worked because we had obtained the 1000 previous samples (albeit of lower analytical quality: they had greater measurement error) to establish that the statistical assumptions being made were in fact good ones for this site. That is how the potential for Type III error was handled.
One more twist for your consideration: knowing the regulatory agency would never approve using just 3 samples, I recommended obtaining 5 measurements. These were to be made of 25 random samples of the entire site, composited in groups of 5. Statistically there would be only 5 numbers in the final hypothesis test, but we achieved greater power to detect an isolated "hot spot" by taking 25 physical samples. This highlights the important relationship between how many numbers are used in the test and how they were obtained. There's more to statistical decision making than just algorithms with numbers!
To my everlasting relief, the five composite values confirmed the cleanup target was met.
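As a quick numerical illustration of the first paragraph (not from the original answer), simulate a one-sample t-test under a true null with a very small sample; the rejection rate stays near the nominal $\alpha$:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, alpha, reps = 5, 0.05, 20_000

# Data generated under the null: the mean really is 0
x = rng.normal(loc=0.0, scale=1.0, size=(reps, n))
res = stats.ttest_1samp(x, popmean=0.0, axis=1)
rate = np.mean(res.pvalue < alpha)

print(rate)   # close to 0.05 even though n = 5
```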
|
12,164
|
Can a small sample size cause type 1 error?
|
Another consequence of a small sample is the increase of type 2 error.
Nunnally demonstrated in the paper "The place of statistics in psychology", 1960, that small samples generally fail to reject a point null hypothesis. These are hypotheses that set some parameter exactly equal to zero, and they are known to be false in the experiments considered.
Conversely, very large samples inflate the practical risk of type 1 errors in the sense that even trivially small departures from an exact point null become significant: the p-value shrinks with sample size while the alpha level stays fixed, so a test on such a sample will nearly always reject the null hypothesis. Read "The Insignificance of Statistical Significance Testing" by Douglas H. Johnson (1999) for an overview of the issue.
This is not a direct answer to the question but these considerations are complementary.
|
12,165
|
How is the .similarity method in SpaCy computed?
|
Found the answer, in short, it's yes:
Link to Source Code
return numpy.dot(self.vector, other.vector) / (self.vector_norm * other.vector_norm)
This looks like it's the formula for computing cosine similarity and the vectors seem to be created with SpaCy's .vector which the documentation says is trained from GloVe's w2v model.
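spaCy isn't needed to see what that line computes; here is the same formula with plain NumPy (a sketch of the formula, not spaCy's actual code path):

```python
import numpy as np

def cosine_similarity(u, v):
    # dot product divided by the product of the vector norms,
    # exactly the formula in the spaCy source line above
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

a = np.array([1.0, 2.0, 3.0])
print(cosine_similarity(a, a))    # identical direction  -> 1.0
print(cosine_similarity(a, -a))   # opposite direction   -> -1.0
print(cosine_similarity(np.array([1.0, 0.0]),
                        np.array([0.0, 1.0])))  # orthogonal -> 0.0
```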
|
12,166
|
How is the .similarity method in SpaCy computed?
|
By default it's cosine similarity, with vectors averaged over the document for missing words.
You can also customize this, by setting a hook to doc.user_hooks['similarity']. This pipeline component wraps similarity functions, making it easy to customise the similarity:
https://github.com/explosion/spaCy/blob/develop/spacy/pipeline.pyx#L50
|
12,167
|
How to calculate number of features based on image resolution?
|
Perhaps a simpler case will make things clearer. Let's say we choose a 1x2 sample of pixels instead of 100x100.
Sample Pixels From the Image
+----+----+
| x1 | x2 |
+----+----+
Imagine when plotting our training set, we noticed that it can't be separated easily with a linear model, so we choose to add polynomial terms to better fit the data.
Let's say we decide to construct our polynomials by including all of the pixel intensities, and all possible products that can be formed from them.
Since our matrix is small, let's enumerate them:
$$x_1,\ x_2,\ x_1^2,\ x_2^2,\ x_1 \times x_2,\ x_2 \times x_1 $$
Interpreting the above sequence of features, we can see that there is a pattern. The first two terms, group 1, are features consisting only of their pixel intensity. The next two terms, group 2, are features consisting of the square of their intensity. The last two terms, group 3, are the products of all pairwise combinations of pixel intensities.
group 1: $x_1,\ x_2$
group 2: $x_1^2,\ x_2^2$
group 3: $x_1 \times x_2,\ x_2 \times x_1$
But wait, there is a problem. If you look at the group 3 terms in the sequence ($ x_1 \times x_2$ and $x_2 \times x_1$) you'll notice that they are equal. Remember our housing example. Imagine having two features x1 = square footage, and x2 = square footage, for the same house... That doesn't make any sense! OK, so we need to get rid of the duplicate feature, let's say, arbitrarily, $x_2 \times x_1$. Now we can rewrite the list of group three features as:
group 3: $x_1 \times x_2$
We count the features in all three groups and get 5.
But this is a toy example. Let's derive a generic formula for calculating the number of features. Let's use our original groups of features as a starting point.
$\text{size of group 1} + \text{size of group 2} + \text{size of group 3} = m \times n + m \times n + m \times n = 3 \times m \times n$
Ah! But we had to get rid of the duplicate product in group 3.
So to properly count the features for group 3 we will need a way to count all unique pairwise products in the matrix. This can be done with the binomial coefficient, which counts all possible unique subgroups of size k drawn from an equal or larger group of size n. So to properly count the features in group 3, calculate $C(m \times n, 2)$.
So our generic formula would be:
$$ m \times n + m \times n + C(m \times n, 2) = 2(m \times n) + C(m \times n, 2) $$
Let's use it to calculate the number of features in our toy example:
$$2 \times 1 \times 2 + C(1 \times 2, 2) = 4 + 1 = 5$$
That's it!
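As a sanity check, the generic formula can be evaluated directly. A minimal sketch (the function name is our own) that computes $2(m \times n) + C(m \times n, 2)$:

```python
from math import comb

def n_quadratic_features(m, n):
    """Pixel intensities, their squares, and all unique pairwise products."""
    p = m * n                    # number of raw pixel features
    return 2 * p + comb(p, 2)    # group 1 + group 2 + group 3

print(n_quadratic_features(1, 2))      # toy 1x2 sample -> 5
print(n_quadratic_features(100, 100))  # full 100x100 image -> 50015000
```

The 100x100 case agrees with the 50,015,000 figure derived in the other answers to this question.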
|
12,168
|
How to calculate number of features based on image resolution?
|
If you are using all the linear and quadratic features, the total number is supposed to be:
100*100 + 100*100 + C(100*100,2) = 50015000
 10000  +  10000  +   49995000   = 50015000
  xi        xi^2       xi*xj
To calculate the combination in Octave/Matlab,
octave:23> nchoosek(100*100,2)
ans = 49995000
|
12,169
|
How to calculate number of features based on image resolution?
|
The idea of $x^2/2$ might also work as an approximation of the number of quadratic features. So for a 100x100 image we know that $x = 10000$ pixel features, and substituting $x$ into the formula gives roughly 50 million.
|
12,170
|
Difference between multivariate standard normal distribution and Gaussian copula
|
One general rule about technical papers--especially those found on the Web--is that the reliability of any statistical or mathematical definition offered in them varies inversely with the number of unrelated non-statistical subjects mentioned in the paper's title. The page title in the first reference offered (in a comment to the question) is "From Finance to Cosmology: The Copula of Large-Scale Structure." With both "finance" and "cosmology" appearing prominently, we can be pretty sure that this is not a good source of information about copulas!
Let's instead turn to a standard and very accessible textbook, Roger Nelsen's An introduction to copulas (Second Edition, 2006), for the key definitions.
... every copula is a joint distribution function with margins that are uniform on [the closed unit interval $[0,1]$].
[At p. 23, bottom.]
For some insight into copulae, turn to the first theorem in the book, Sklar's Theorem:
Let $H$ be a joint distribution function with margins $F$ and $G$. Then there exists a copula $C$ such that for all $x,y$ in [the extended real numbers], $$H(x,y) = C(F(x),G(y)).$$
[Stated on pp. 18 and 21.]
Although Nelsen does not call it as such, he does define the Gaussian copula in an example:
... if $\Phi$ denotes the standard (univariate) normal distribution function and $N_\rho$ denotes the standard bivariate normal distribution function (with Pearson's product-moment correlation coefficient $\rho$), then ... $$C(u,v) = \frac{1}{2\pi\sqrt{1-\rho^2}}\int_{-\infty}^{\Phi^{-1}(u)}\int_{-\infty}^{\Phi^{-1}(v)}\exp\left[\frac{-\left(s^2-2\rho s t + t^2\right)}{2\left(1-\rho^2\right)}\right]dsdt$$
[at p. 23, equation 2.3.6]. From the notation it is immediate that this $C$ indeed is the joint distribution for $(u,v)$ when $(\Phi^{-1}(u), \Phi^{-1}(v))$ is bivariate Normal. We may now turn around and construct a new bivariate distribution having any desired (continuous) marginal distributions $F$ and $G$ for which this $C$ is the copula, merely by replacing these occurrences of $\Phi$ by $F$ and $G$: take this particular $C$ in the characterization of copulas above.
So yes, this looks remarkably like the formulas for a bivariate normal distribution, because it is bivariate normal for the transformed variables $(\Phi^{-1}(F(x)),\Phi^{-1}(G(y)))$. Because these transformations will be nonlinear whenever $F$ and $G$ are not already (univariate) Normal CDFs themselves, the resulting distribution is not (in these cases) bivariate normal.
Example
Let $F$ be the distribution function for a Beta$(4,2)$ variable $X$ and $G$ the distribution function for a Gamma$(2)$ variable $Y$. By using the preceding construction we can form the joint distribution $H$ with a Gaussian copula and marginals $F$ and $G$. To depict this distribution, here is a partial plot of its bivariate density on $x$ and $y$ axes:
The dark areas have low probability density; the light regions have the highest density. All the probability has been squeezed into the region where $0\le x \le 1$ (the support of the Beta distribution) and $0 \le y$ (the support of the Gamma distribution).
The lack of symmetry makes it obviously non-normal (and without normal margins), but it nevertheless has a Gaussian copula by construction. FWIW it has a formula and it's ugly, also obviously not bivariate Normal:
$$\frac{2}{\sqrt{3}} \left(20 (1-x) x^3\right) \left(e^{-y} y\right) \exp \left(w(x,y)\right)$$
where $w(x,y)$ is given by $$\left(\text{erfc}^{-1}\big(2\, Q(2,0,y)\big)\right)^2-\frac{2}{3} \left(\sqrt{2}\, \text{erfc}^{-1}\big(2\, Q(2,0,y)\big)-\frac{\text{erfc}^{-1}\big(2\, I_x(4,2)\big)}{\sqrt{2}}\right)^2.$$
($Q$ is a regularized Gamma function and $I_x$ is a regularized Beta function.)
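To make the construction concrete, here is a small sampling sketch (assuming SciPy is available; the value $\rho = 0.6$ is arbitrary): draw bivariate normal pairs, push them through $\Phi$ to get uniform margins, then through the inverse CDFs of the desired margins:

```python
import numpy as np
from scipy import stats

rho = 0.6  # arbitrary correlation for the Gaussian copula
z = stats.multivariate_normal(mean=[0, 0], cov=[[1, rho], [rho, 1]]).rvs(
    size=10000, random_state=0)

# Phi maps each normal coordinate to a Uniform[0,1] margin (the copula scale)
u, v = stats.norm.cdf(z[:, 0]), stats.norm.cdf(z[:, 1])

# Inverse CDFs impose the desired margins: Beta(4,2) for X, Gamma(2) for Y
x = stats.beta(4, 2).ppf(u)
y = stats.gamma(2).ppf(v)
```

The pairs $(x, y)$ have exactly Beta and Gamma margins, yet their dependence structure is the Gaussian copula: the construction mirrors the replacement of $\Phi$ by $F$ and $G$ described above.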
|
12,171
|
Regression for a model of form $y=ax^k$?
|
Your example is a very good one because it clearly points up recurrent issues with such data.
Two common names are power function and power law. In biology, and some other fields, people often talk of allometry, especially whenever you are relating size measurements. In physics, and some other fields, people talk of scaling laws.
I would not regard monomial as a good term here, as I associate that with integer powers. For the same reason this is not best regarded as a special case of a polynomial.
Problems of fitting a power law to the tail of a distribution morph into problems of fitting a power law to the relationship between two different variables.
The easiest way to fit a power law is to take logarithms of both variables and then fit a straight line using regression. There are many objections to this whenever both variables are subject to error, as is common. The example here is a case in point as both variables (and neither) might be regarded as the response (dependent variable). That argument leads to a more symmetric method of fitting.
In addition, there is always the question of assumptions about error structure.
Again, the example here is a case in point as errors are clearly heteroscedastic. That suggests something more like weighted least-squares.
One excellent review is http://www.ncbi.nlm.nih.gov/pubmed/16573844
Yet another problem is that people often identify power laws only over some range of their data. The questions then become scientific as well as statistical, going all the way down to whether identifying power laws is just wishful thinking or a fashionable amateur pastime. Much of the discussion arises under the headings of fractal and scale-free behaviour, with associated discussion ranging from physics to metaphysics. In your specific example, a little curvature seems evident.
Enthusiasts for power laws are not always matched by sceptics, because the enthusiasts publish more than the sceptics. I'd suggest that a scatter plot on logarithmic scales, although a natural and excellent plot that is essential, should be accompanied by residual plots of some kind to check for departures from power function form.
|
12,172
|
Regression for a model of form $y=ax^k$?
|
If you assume that a power law is a good model to fit, then you can use log(y) ~ log(x) as your model, and fit a linear regression using lm():
Try this:
# Generate some data
set.seed(42)
x <- seq(1, 10, 1)
a = 10
b = 2
scatt <- rnorm(10, sd = 0.2)
dat <- data.frame(
x = x,
y = a*x^(-b) + scatt
)
Fit a model:
# Fit a model
model <- lm(log(y) ~ log(x) + 1, data = dat)
summary(model)
pred <- data.frame(
x = dat$x,
p = exp(predict(model, dat))
)
Now create a plot:
# Create a plot
library(ggplot2)
ggplot() +
geom_point(data = dat, aes(x=x, y=y)) +
geom_line(data = pred, aes(x=x, y=p), col = "red")
|
12,173
|
Difference in using normalized gradient and gradient
|
In a gradient descent algorithm, the algorithm proceeds by finding a direction along which you can find the optimal solution. The optimal direction turns out to be the gradient. However, since we are only interested in the direction and not necessarily how far we move along that direction, we are usually not interested in the magnitude of the gradient. Therefore, the normalized gradient is good enough for our purposes and we let $\eta$ dictate how far we want to move in the computed direction. However, if you use unnormalized gradient descent, then at any point the distance you move in the optimal direction is dictated by the magnitude of the gradient (in essence, dictated by the surface of the objective function: a point on a steep surface will have a high-magnitude gradient, whereas a point on a fairly flat surface will have a low-magnitude one).
From the above, you might have realized that normalization of the gradient is an added controlling power that you get (whether it is useful or not is something up to your specific application). What I mean by the above is:
1] If you want to ensure that your algorithm moves in fixed step sizes in every iteration, then you might want to use normalized gradient descent with fixed $\eta$.
2] If you want to ensure that your algorithm moves in step sizes which is dictated precisely by you, then again you may want to use normalized gradient descent with your specific function for step size encoded into $\eta$.
3] If you want to let the magnitude of the gradient dictate the step size, then you will use unnormalized gradient descent.
There are several other variants like you can let the magnitude of the gradient decide the step size, but you put a cap on it and so on.
Now, step size clearly has influence on the speed of convergence and stability. Which of the above step sizes works best depends purely on your application (i.e. the objective function). In certain cases, the relationship between speed of convergence, stability and step size can be analyzed. This relationship then may give a hint as to whether you would want to go with normalized or unnormalized gradient descent.
To summarize, there is no difference between normalized and unnormalized gradient descent (as far as the theory behind the algorithm goes). However, it has practical impact on the speed of convergence and stability. The choice of one over the other is purely based on the application/objective at hand.
|
12,174
|
Difference in using normalized gradient and gradient
|
Which method has faster convergence will depend on your specific objective, and generally I use the normalized gradient. A good example of why you might want to do this is a simple quadratic: $f(x) = x^Tx$. In this case the ODE that describes a gradient descent trajectory (as the step size approaches zero) can be solved analytically: $\dot{x} = -\nabla f(x) = -2x$, giving $x(t) = x_0 e^{-2t}$. So the norm of the gradient decreases exponentially fast as you approach the critical point. In such cases it's often better to bounce back and forth across the min a few times than to approach it very slowly. In general though, first-order methods are known to have very slow convergence around critical points, so you should not really be using them if you really care about accuracy. If you can't compute the Hessian of your objective analytically you can still approximate it (e.g. with BFGS).
|
12,175
|
Difference in using normalized gradient and gradient
|
What really matters is how $\eta$ is selected. It doesn't matter whether you use the normalized gradient or the unnormalized gradient if the step size is selected in a way that makes the length of $\eta$ times the gradient the same.
|
12,176
|
How to interpret coefficients from a logistic regression?
|
If you're fitting a binomial GLM with a logit link (i.e. a logistic regression model), then your regression equation is the log odds that the response value is a '1' (or a 'success'), conditioned on the predictor values.
Exponentiating the log odds gives you the odds ratio for a one-unit increase in your variable. So for example, with "gender", if Female = 0 and Male = 1 and a logistic regression coefficient of 0.014, then you can assert that the odds of your outcome for men are exp(0.014) = 1.01 times the odds of your outcome for women.
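With a single binary predictor, the fitted logistic coefficients reproduce the group log odds exactly, so the exponentiated slope equals the sample odds ratio. A sketch with made-up counts (the 2x2 table below is hypothetical):

```python
import numpy as np

# Hypothetical counts: rows = gender (0 = female, 1 = male),
# columns = [failures, successes]
table = np.array([[400, 100],    # women
                  [380, 120]])   # men

odds = table[:, 1] / table[:, 0]          # odds of success in each group
b0 = np.log(odds[0])                      # intercept: log odds for women
b1 = np.log(odds[1]) - np.log(odds[0])    # slope for the gender dummy

odds_ratio = np.exp(b1)                   # odds(men) / odds(women)
print(odds_ratio)
```

Fitting a logistic regression to the equivalent 0/1 data would recover the same b0 and b1, since the model is saturated for a single binary predictor.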
|
12,177
|
How to interpret coefficients from a logistic regression?
|
the odds ratio of women should be 1 / exp(0.014)
explanation:
since the event for male is '1' and female is '0'
that means the reference level is female.
the equation is ln(odds) = B0 + B1*(gender)
odds(female) = exp(B0)
odds(male) = exp(B0 + B1 * 1)
odds ratio(male) = odds(male) / odds(female) = exp(B1) = exp(0.014) = 1.01
therefore, odds ratio(female) = 1 / 1.01 = 0.99
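The arithmetic above can be checked in a couple of lines (the coefficient 0.014 is the one from the question; everything else follows from it):

```python
import math

b1 = 0.014  # logistic regression coefficient for gender (male = 1)

# odds(male) / odds(female) = exp(B0 + B1) / exp(B0) = exp(B1)
odds_ratio_male = math.exp(b1)
odds_ratio_female = 1 / odds_ratio_male   # reference level flipped

print(round(odds_ratio_male, 2))    # 1.01
print(round(odds_ratio_female, 2))  # 0.99
```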
|
12,178
|
How to compute prediction bands for non-linear regression?
|
This is called the Delta Method.
Suppose that you have some function $y = G(\beta,x) + \epsilon$; note that $G(\cdot)$ is a function of the parameters that you estimate, $\beta$, and the values of your predictors, $x$. First, find the derivative of this function with respect to your vector of parameters, $\beta$: $G^\prime(\beta, x)$. This says, if you change a parameter by a little bit, how much does your function change? Note that this derivative may be a function of your parameters themselves as well as the predictors. For example, if $G(\beta,x) = \exp (\beta x)$, then the derivative is $x \exp (\beta x)$, which depends upon the value of $\beta$ and the value of $x$. To evaluate this, you plug in the estimate of $\beta$ that your procedure gives, $\hat{\beta}$, and the value of the predictor $x$ where you want the prediction.
The Delta Method, derived from maximum likelihood procedures, states that the variance of $G\left(\hat{\beta}, x\right)$ is going to be
$$G^\prime\left(\hat{\beta},x\right)^T \text{Var}\left(\hat{\beta}\right) G^\prime\left(\hat{\beta},x\right),$$
where $\text{Var}\left(\hat{\beta}\right)$ is the variance-covariance matrix of your estimates (this is equal to the inverse of the Hessian---the second derivatives of the likelihood function at your estimates). The function that your statistics packages employs calculates this value for each different value of the predictor $x$. This is just a number, not a vector, for each value of $x$.
This gives the variance of the value of the function at each point and this is used just like any other variance in calculating confidence intervals: take the square root of this value, multiply by the critical value for the normal or applicable t distribution relevant for a particular confidence level, and add and subtract this value to the estimate of $G(\cdot)$ at the point.
For prediction intervals, we need to take the variance of the outcome given the predictors $x$, $\text{Var}(y \mid x) \equiv \sigma^2$, into account. Hence, we must boost our variance from the Delta Method by our estimate of the variance of $\epsilon$, $\hat{\sigma}^2$, to get the variance of $y$, rather than the variance of the expected value of $y$ that is used for confidence intervals. Note that $\hat{\sigma}^2$ is the sum of squared errors (SS in help file notation) divided by the degrees of freedom (DF).
In the notation used in the help file above, it looks like their value of c does not take $\sigma^2$ into account; that is, the inverse of their Hessian is $\sigma^{-2}$ times the one that I give. I'm not sure why they do that. It could be a way of writing the confidence and prediction intervals in a more familiar way (of $\sigma$ times some number times some critical value). The variance that I give is actually c*SS/DF in their notation.
For example, in the familiar case of linear regression, their c would be $\left(x^\prime x\right)^{-1}$, while the $\text{Var}\left(\hat{\beta}\right) = \sigma^2 \left(x^\prime x\right)^{-1}$.
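A sketch of the computation for the $G(\beta,x) = \exp(\beta x)$ example with a scalar parameter (the estimate, its variance, and $\hat\sigma^2$ below are hypothetical numbers, not output from any real fit):

```python
import numpy as np

beta_hat = 0.5      # hypothetical parameter estimate
var_beta = 0.04     # hypothetical Var(beta_hat), from the inverse Hessian
sigma2_hat = 0.25   # hypothetical residual variance, SS / DF

def G(beta, x):
    return np.exp(beta * x)

def G_prime(beta, x):  # derivative of G with respect to beta
    return x * np.exp(beta * x)

x = 2.0
g = G_prime(beta_hat, x)

# Delta-method variance of G(beta_hat, x); scalar case of g' Var(beta) g
var_mean = g * var_beta * g
# For a prediction interval, add the residual variance of y given x
var_pred = var_mean + sigma2_hat

z = 1.96  # normal critical value for a 95% interval
ci = (G(beta_hat, x) - z * np.sqrt(var_mean),
      G(beta_hat, x) + z * np.sqrt(var_mean))
pi = (G(beta_hat, x) - z * np.sqrt(var_pred),
      G(beta_hat, x) + z * np.sqrt(var_pred))
```

As the text says, the prediction interval `pi` is strictly wider than the confidence interval `ci` because it carries the extra $\hat\sigma^2$ term.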
|
12,179
|
Why do we make a big fuss about using Fisher scoring when we fit a GLM?
|
Fisher's scoring is just a version of Newton's method that happens to be identified with GLMs, there's nothing particularly special about it, other than the fact that the Fisher's information matrix happens to be rather easy to find for random variables in the exponential family. It also ties in to a lot of other math-stat material that tends to come up about the same time, and gives a nice geometric intuition about what exactly Fisher information means.
There's absolutely no reason I can think of not to use some other optimizer if you prefer, other than that you might have to code it by hand rather than use a pre-existing package. I suspect that any strong emphasis on Fisher scoring is a combination of (in order of decreasing weight) pedagogy, ease-of-derivation, historical bias, and "not-invented-here" syndrome.
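To make the connection concrete, here is a bare-bones Fisher scoring / IRLS loop for logistic regression (with the canonical logit link, Fisher scoring coincides with Newton's method). The data are simulated; this is a sketch, not a production fitter:

```python
import numpy as np

# Simulated toy data
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
true_beta = np.array([-0.5, 1.0])
y = rng.random(200) < 1 / (1 + np.exp(-X @ true_beta))

beta = np.zeros(2)
for _ in range(25):
    mu = 1 / (1 + np.exp(-X @ beta))   # fitted probabilities
    W = mu * (1 - mu)                  # IRLS weights (Fisher information terms)
    # Fisher scoring update: beta += (X' W X)^{-1} X' (y - mu)
    step = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y - mu))
    beta += step
    if np.max(np.abs(step)) < 1e-10:
        break
```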
|
12,180
|
Why do we make a big fuss about using Fisher scoring when we fit a GLM?
|
It's historical, and pragmatic; Nelder and Wedderburn reverse-engineered GLMs, as the set of models where you can find the MLE using Fisher scoring (i.e. Iteratively ReWeighted Least Squares). The algorithm came before the models, at least in the general case.
It's also worth remembering that IWLS was what they had available back in the early 70s, so GLMs were an important class of models to know about. The fact you can maximize GLM likelihoods reliably using Newton-type algorithms (they typically have unique MLEs) also meant that programs like GLIM could be used by those without skills in numerical optimization.
|
12,181
|
New revolutionary way of data mining?
|
Does this make any sense? Partly.
What does he mean? Please ask him.
Do you have a clue - or perhaps even a name for the proposed method and some references?
Cross Validation. http://en.wikipedia.org/wiki/Cross-validation_(statistics)
Or did this guy find the holy grail nobody else understands? No.
He even says in this interview that his method could potentially revolutionize science... Perhaps he forgot to include the references for that statement ...
|
12,182
|
New revolutionary way of data mining?
|
Not sure if there'll be any other "ranty" responses, but here's mine.
Cross Validation is in no way "new". Additionally, Cross Validation is not used when analytic solutions are found. For example you don't use cross validation to estimate the betas, you use OLS or IRLS or some other "optimal" solution.
What I see as a glaringly obvious gap in the quote is no reference to any notion of actually checking the "best" models to see if they make sense. Generally, a good model makes sense on some intuitive level. It seems like the claim is that CV is a silver bullet to all prediction problems. There is also no talk of setting up at the higher level of model structure - do we use SVM, Regression Trees, Boosting, Bagging, OLS, GLMs, GLMNs? Do we regularise variables? If so how? Do we group variables together? Do we want robustness to sparsity? Do we have outliers? Should we model the data as a whole or in pieces? There are too many approaches to be decided on the basis of CV.
And another important aspect is what computer systems are available? How is the data stored and processed? Is there missingness - how do we account for this?
And here is the big one: do we have sufficiently good data to make good predictions? Are there known variables that we don't have in our data set? Is our data representative of whatever it is we're trying to predict?
Cross Validation is a useful tool, but hardly revolutionary. I think the main reason people like it is that it seems like a "math free" way of doing statistics. But there are many areas of CV which are not theoretically resolved - such as the size of the folds, the number of splits (how many times do we divide the data up into $K$ groups?), should the division be random or systematic (e.g. remove a state or province per fold or just some random 5%)? When does it matter? How do we measure performance? How do we account for the fact that the error rates across different folds are correlated as they are based on the same $K-2$ folds of data?
Additionally, I personally haven't seen a comparison of the trade-off between computer intensive CV and less expensive methods such as REML or Variational Bayes. What do we get in exchange for spending the additional computing time? Also it seems like CV is more valuable in the "small $n$" and "big $p$" cases than the "big $n$ small $p$" one, since in the "big $n$ small $p$" case the out-of-sample error is very nearly equal to the in-sample error.
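For reference, the basic mechanics being debated are just this simple K-fold loop (toy data, and a deliberately trivial "model" that predicts the training-fold mean):

```python
import numpy as np

rng = np.random.default_rng(1)
y = rng.normal(loc=5.0, size=100)   # simulated response, variance 1
K = 5
folds = np.array_split(rng.permutation(100), K)

errors = []
for k in range(K):
    test_idx = folds[k]
    train_idx = np.concatenate([folds[j] for j in range(K) if j != k])
    y_hat = y[train_idx].mean()      # "fit" on the other K-1 folds
    errors.append(np.mean((y[test_idx] - y_hat) ** 2))

cv_mse = np.mean(errors)             # cross-validated estimate of MSE
```

Note that each pair of training sets shares $K-2$ folds, which is exactly the correlation issue raised above.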
|
12,183
|
New revolutionary way of data mining?
|
You can look for patterns where, on average, all the models
out-of-sample continue to do well.
My understanding of the word "patterns" here is that he means different market conditions. A naive approach will analyse all available data (we all know more data is better), train the best curve-fitting model, then run it on all data, and trade with it all the time.
The more successful hedge fund managers and algorithmic traders use their market knowledge. As a concrete example the first half hour of a trading session can be more volatile. So they'll try the models on all their data but for just that first half hour, and on all their data, but excluding that first half hour. They may discover that two of their models do well on the first half hour, but eight of them lose money. Whereas, when they exclude that first half hour, seven of their models make money, three lose money.
But, rather than taking those two winning models and using them in the first half hour of trading, they say: that is a bad time of day for algorithmic trading, and we're not going to trade at all. The rest of the day they will use their seven models. I.e. it appears that the market is easier to predict with machine learning at those times, so those models have more chance of being reliable going forward.
(Time of day isn't the only pattern; others are usually related to news events, e.g. the market is more volatile just before key economic figures are announced.)
That is my interpretation of what he is saying; it may be totally wrong, but I hope it is still useful food for thought for somebody.
|
12,184
|
New revolutionary way of data mining?
|
His explanation about a common error in data mining seems sensible. His explanation of what he does does not make any sense. What does he mean when he says "Generally speaking, you are really getting somewhere if the out-of-sample results are more than 50 percent of the in-sample."? Then bad-mouthing SAS and IBM doesn't make him look very smart either. People can have success in the market without understanding statistics, and part of success is luck. It is wrong to treat successful businessmen as if they are gurus of forecasting.
|
12,185
|
New revolutionary way of data mining?
|
As a finance professional, I know enough context that the statement does not present any ambiguity. Financial time series are often characterized by regime changes, structural breaks, and concept drift, so cross-validation as practised in other industries is not as successful in financial applications. In the second part, he refers to a financial metric, either return on investment or Sharpe ratio (return in the numerator), not MSE or another loss function. If the in-sample strategy produces a 10% return, then in real trading it may quite realistically produce only 5%. The "revolutionary" part is most certainly about his proprietary analysis approach, not about the quotes.
|
12,186
|
Normalizing constant in Bayes theorem
|
The denominator, $\Pr(\textrm{data})$, is obtained by integrating out the parameters from the joint probability, $\Pr(\textrm{data}, \textrm{parameters})$. This is the marginal probability of the data and, of course, it does not depend on the parameters since these have been integrated out.
Now, since:
$\Pr(\textrm{data})$ does not depend on the parameters for which one wants to make inference;
$\Pr(\textrm{data})$ is generally difficult to calculate in a closed-form;
one often uses the following adaptation of Bayes' formula:
$\Pr(\textrm{parameters} \mid \textrm{data}) \propto \Pr(\textrm{data} \mid \textrm{parameters}) \Pr(\textrm{parameters})$
Basically, $\Pr(\textrm{data})$ is nothing but a "normalising constant", i.e., a constant that makes the posterior density integrate to one.
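A tiny discrete illustration of this point, with made-up prior and likelihood values: $\Pr(\textrm{data})$ is just whatever constant makes the posterior sum to one.

```python
# Two candidate parameter values with hypothetical prior/likelihood numbers.
priors = {"theta1": 0.5, "theta2": 0.5}
likelihood = {"theta1": 0.8, "theta2": 0.2}   # Pr(data | theta)

unnormalized = {t: likelihood[t] * priors[t] for t in priors}
pr_data = sum(unnormalized.values())           # the normalizing constant
posterior = {t: unnormalized[t] / pr_data for t in priors}

# The posterior now integrates (here: sums) to one.
assert abs(sum(posterior.values()) - 1.0) < 1e-12
```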
|
12,187
|
Normalizing constant in Bayes theorem
|
When applying Bayes' rule, we usually wish to infer the "parameters" and the "data" is already given. Thus, $\Pr(\textrm{data})$ is a constant and we can assume that it is just a normalizing factor.
|
12,188
|
Normalizing constant in Bayes theorem
|
Most explanations of Bayes miss the mark. Consider the following for the role of Pr(B).
The crux of Bayes is the "update factor" $[Pr(B|A) / Pr(B)]$.
This is the transformation applied to the prior.
If B always occurs in all states of the world, there is no information content & the update factor is 1.
In this case, $Pr(A|B) = Pr(A)$.
However, if B occurs frequently when A has occurred, but the overall probability of B occurring is very low, then there is high information content with respect to Pr(A).
The update factor will be HIGH and so $Pr(A|B) >> Pr(A)$.
For completeness, if B occurs rarely when A has occurred, but the overall probability of B occurring is very high, then there is also information content with respect to Pr(A), but in the opposite direction.
The update factor will be LOW and so $Pr(A|B) << Pr(A)$.
Purely mechanical explanations of Bayes seem to miss the genius of this simple equation.
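A quick numeric check of the "high information content" case, with made-up probabilities:

```python
pr_A = 0.1
pr_B_given_A = 0.9   # B occurs frequently when A has occurred
pr_B = 0.15          # but B is rare overall

update = pr_B_given_A / pr_B       # update factor = 6.0: high information
pr_A_given_B = pr_A * update       # Bayes: Pr(A|B) = Pr(A) * update = 0.6

# Observing B raised the probability of A from 0.1 to 0.6.
assert pr_A_given_B > pr_A
```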
|
12,189
|
Measuring Document Similarity
|
For text documents, the feature vectors can be very high dimensional and sparse under any of the standard representations (bag of words or TF-IDF etc). Measuring distances directly under such a representation may not be reliable since it is a known fact that in very high dimensions, distance between any two points starts to look the same. One way to deal with this is to reduce the data dimensionality by using PCA or LSA (Latent Semantic Analysis; also known as Latent Semantic Indexing) and then measure the distances in the new space. Using something like LSA over PCA is advantageous since it can give a meaningful representation in terms of "semantic concepts", apart from measuring distances in a lower dimensional space.
Comparing documents based on the probability distributions is usually done by first computing the topic distribution of each document (using something like Latent Dirichlet Allocation), and then computing some sort of divergence (e.g., KL divergence) between the topic distributions of pair of documents. In a way, it's actually kind of similar to doing LSA first and then measuring distances in the LSA space using KL-divergence between the vectors (instead of cosine similarity).
KL-divergence is a distance measure for comparing distributions so it may be preferable if the document representation is in terms of some distribution (which is often actually the case -- e.g., documents represented as distribution over topics, as in LDA). Also note that under such a representation, the entries in the feature vector would sum to one (since you are basically treating the document as a distribution over topics or semantic concepts).
Also see a related thread here.
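The two notions of similarity discussed above can be sketched in pure Python with a tiny made-up corpus and made-up topic vectors (a real pipeline would use TF-IDF plus LSA or LDA): cosine similarity on bag-of-words counts, and a symmetrised KL divergence between topic distributions.

```python
import math

def bow(text):
    # Bag-of-words term counts for a whitespace-tokenised document.
    counts = {}
    for w in text.lower().split():
        counts[w] = counts.get(w, 0) + 1
    return counts

def cosine(a, b):
    dot = sum(a[w] * b.get(w, 0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

def sym_kl(p, q):
    # Symmetrised KL divergence between two topic distributions
    # (entries sum to one); assumes strictly positive entries.
    kl = lambda x, y: sum(xi * math.log(xi / yi) for xi, yi in zip(x, y))
    return 0.5 * (kl(p, q) + kl(q, p))

d1 = bow("the cat sat on the mat")
d2 = bow("the cat sat on the hat")
print(cosine(d1, d2))  # ≈ 0.875: near-identical documents

# Topic distributions as LDA might output them (made-up numbers).
t1 = [0.7, 0.2, 0.1]
t2 = [0.6, 0.3, 0.1]
t3 = [0.1, 0.2, 0.7]
print(sym_kl(t1, t2), sym_kl(t1, t3))  # t1 far closer to t2 than to t3
```

In high dimensions the raw bag-of-words vectors are sparse, which is exactly why one would first project with LSA before measuring distances.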
|
12,190
|
Measuring Document Similarity
|
You might want to try this online service for cosine document similarity http://www.scurtu.it/documentSimilarity.html
import urllib.parse
import urllib.request
import json
API_URL = "http://www.scurtu.it/apis/documentSimilarity"
inputDict = {}
inputDict['doc1'] = 'Document with some text'
inputDict['doc2'] = 'Other document with some text'
# POST both documents to the API and parse the JSON response.
params = urllib.parse.urlencode(inputDict).encode('utf-8')
f = urllib.request.urlopen(API_URL, params)
response = f.read()
responseObject = json.loads(response)
print(responseObject)
|
12,191
|
Cauchy Distribution and Central Limit Theorem
|
The distribution of the mean of $n$ i.i.d. samples from a Cauchy distribution has the same distribution (including the same median and inter-quartile range) as the original Cauchy distribution, no matter what the value of $n$ is.
So you do not get either the Gaussian limit or the reduction in dispersion associated with the Central Limit Theorem.
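A quick simulation illustrates this (pure Python, inverse-CDF sampling of a standard Cauchy): the interquartile range of the sample mean stays near 2 (quartiles at ±1), no matter how many observations go into each mean.

```python
import math
import random

random.seed(0)

def cauchy():
    # Inverse-CDF sampling for the standard Cauchy distribution.
    return math.tan(math.pi * (random.random() - 0.5))

def iqr(xs):
    s = sorted(xs)
    return s[(3 * len(s)) // 4] - s[len(s) // 4]

for n in (1, 10, 100):
    means = [sum(cauchy() for _ in range(n)) / n for _ in range(2000)]
    print(n, iqr(means))  # stays around 2 -- no shrinkage as n grows
```

Contrast this with a finite-variance distribution, where the IQR of the mean would shrink like $1/\sqrt{n}$.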
|
12,192
|
What common forecasting models can be seen as special cases of ARIMA models?
|
The Box-Jenkins approach incorporates all well-known forecasting models except multiplicative models like the Holt-Winters Multiplicative Seasonal Model, where the expected value is based upon a multiplicand.
The multiplicative seasonal model can be used to model time series where one has the following (in my opinion a very unusual) case: If the amplitude of the seasonal component/pattern is proportional to the average level of the series, the series can be referred to as having multiplicative seasonality. Even in the case of multiplicative models, one can often represent these as ARIMA models thus completing the "umbrella."
Furthermore since a Transfer Function is a Generalized Least Squares Model it can reduce to a standard regression model by omitting the ARIMA component and assuming a set of weights needed to homogenize the error structure.
|
12,193
|
What common forecasting models can be seen as special cases of ARIMA models?
|
You can add
Drift: ARIMA(0,1,0) with constant.
Damped Holt's: ARIMA(1,1,2)
Additive Holt-Winters: SARIMA(0,1,$m+1$)(0,1,0)$_m$.
However, HW uses only three parameters and that (rather strange) ARIMA model has $m+1$ parameters. So there are a lot of parameter constraints.
The ETS (exponential smoothing) and ARIMA classes of models overlap, but neither is contained within the other. There are a lot of non-linear ETS models that have no ARIMA equivalent, and a lot of ARIMA models that have no ETS equivalent. For example, all ETS models are non-stationary.
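The first equivalence in the list can be sketched directly: ARIMA(0,1,0) with a constant $c$ is the random walk with drift $y_t = y_{t-1} + c + e_t$, so fitting it amounts to estimating $c$ as the mean of the first differences (toy data below).

```python
# Made-up series with a roughly constant upward drift.
y = [10.0, 12.1, 13.9, 16.2, 18.0, 20.1, 21.8, 24.2]

diffs = [b - a for a, b in zip(y, y[1:])]
c = sum(diffs) / len(diffs)  # estimated drift = the ARIMA constant

h = 3
forecasts = [y[-1] + c * (i + 1) for i in range(h)]
print(c)          # ~2.03
print(forecasts)  # last value plus h steps of drift
```

The h-step forecast is just the last observation plus h times the estimated drift, which is exactly what the ARIMA(0,1,0)-with-constant forecast function reduces to.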
|
12,194
|
What common forecasting models can be seen as special cases of ARIMA models?
|
The exponentially weighted moving average (EWMA) is algebraically equivalent to an ARIMA(0,1,1) model.
To put it another way, the EWMA is a particular model within the class of ARIMA models. In fact, there are various types of EWMA models and these happen to be included in the class of ARIMA(0,d,q) models - see Cogger (1974):
The Optimality of General-Order Exponential Smoothing
by K. O. Cogger. Operations Research. Vol. 22, No. 4 (Jul. - Aug., 1974), pp. 858-867.
The abstract for the paper is as follows:
This paper derives the class of nonstationary time-series
representations for which exponential smoothing of arbitrary order
minimizes mean-square forecast error. It points out that these
representations are included in the class of integrated moving
averages developed by Box and Jenkins, permitting various procedures
to be applied to estimating the smoothing constant and determining the
appropriate order of smoothing. These results further permit the
principle of parsimony in parameterization to be applied to any choice
between exponential smoothing and alternative forecasting procedures.
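The EWMA/ARIMA(0,1,1) equivalence can be verified numerically: simple exponential smoothing with smoothing constant $\alpha$ produces exactly the same one-step forecasts as ARIMA(0,1,1) with MA coefficient $\theta = \alpha - 1$ (toy data and $\alpha$ chosen arbitrarily below).

```python
y = [3.0, 5.0, 4.0, 6.0, 7.0, 5.5, 6.5, 8.0]
alpha = 0.3
theta = alpha - 1.0

# EWMA / SES recursion: f[t+1] = alpha*y[t] + (1 - alpha)*f[t]
ses = [y[0]]
for t in range(len(y) - 1):
    ses.append(alpha * y[t] + (1 - alpha) * ses[t])

# ARIMA(0,1,1) recursion: f[t+1] = y[t] + theta * (y[t] - f[t])
arima = [y[0]]
for t in range(len(y) - 1):
    e = y[t] - arima[t]           # one-step forecast error
    arima.append(y[t] + theta * e)

# Identical up to floating point:
print(max(abs(a - b) for a, b in zip(ses, arima)))
```

Expanding the ARIMA recursion, $y_t + (\alpha - 1)(y_t - f_t) = \alpha y_t + (1-\alpha) f_t$, which is the SES recursion term for term.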
|
12,195
|
What common forecasting models can be seen as special cases of ARIMA models?
|
"The Gauss-Markov plus white noise model of the first difference is a special case of an ARIMA (1,1,1)
and the damped cosine plus white noise model is a special case of an ARIMA (2,1,2)."
https://lib.dr.iastate.edu/cgi/viewcontent.cgi?article=9290&context=rtd
|
12,196
|
When should one use multiple regression with dummy coding vs. ANCOVA?
|
ttnphns is correct.
However, given your additional comments I would suggest that the reviewer wanted the change merely for interpretation. If you want to stick with ANOVA-style results, just call it ANOVA. ANCOVA and regression are the same, as ttnphns pointed out. The difference is that with ANCOVA you don't treat the covariates as predictors, and you definitely appear to want to do just that.
What the reviewer was getting at was that, while you can perform an ANOVA on continuous predictors, it's typical that one perform a regression. One feature of this is that you get estimates of the effects of the continuous variable and you can even look at interactions between it and the categorical (which aren't included in an ANCOVA but could be in an ANOVA).
You may need some help with interpretation of regression results because funny things happen on the way to interactions if you're going to use the beta values to determine the significance of your effects.
|
12,197
|
When should one use multiple regression with dummy coding vs. ANCOVA?
|
These two are the same thing. For example, in SPSS the procedure where I specify ANCOVA is called GLM (general linear model); it asks to input "factors" (categorical predictors) and "covariates" (continuous predictors). If I recode the "factors" into dummy variables (omitting one redundant category from each factor) and input all those together with the covariates as "independent variables" in the REGRESSION procedure (linear regression), I will obtain the same results as with GLM (provided that the dependent variable is the same, of course).
P.S. The results will be identical if the models are identical. If regression contains only main effects then ANCOVA should be specified without factor by factor interactions, of course.
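A sketch of this equivalence outside SPSS (pure Python, made-up data: two groups and one covariate): the dummy-coded regression and the "factor"-style coding with one indicator per group are reparameterisations of the same model, so their fitted values agree exactly.

```python
group = [0, 0, 0, 1, 1, 1]
x     = [1.0, 2.0, 3.0, 1.5, 2.5, 3.5]
y     = [2.1, 3.9, 6.2, 3.0, 5.1, 7.2]

def lstsq(X, y):
    # Solve the normal equations X'X b = X'y by Gauss-Jordan elimination.
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    for i in range(k):
        p = A[i][i]
        A[i] = [v / p for v in A[i]]
        b[i] /= p
        for j in range(k):
            if j != i:
                f = A[j][i]
                A[j] = [vj - f * vi for vj, vi in zip(A[j], A[i])]
                b[j] -= f * b[i]
    return b

# Coding 1: intercept + dummy + covariate (regression with dummy coding).
X1 = [[1.0, g, xi] for g, xi in zip(group, x)]
# Coding 2: one indicator per group + covariate ("factor" coding).
X2 = [[1.0 - g, g, xi] for g, xi in zip(group, x)]

b1, b2 = lstsq(X1, y), lstsq(X2, y)
fit1 = [sum(c * v for c, v in zip(b1, row)) for row in X1]
fit2 = [sum(c * v for c, v in zip(b2, row)) for row in X2]
print(max(abs(a - b) for a, b in zip(fit1, fit2)))  # ~0: same model
```

The group-effect coefficient in coding 1 equals the difference between the two group intercepts in coding 2, which is why the results match.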
|
12,198
|
When should one use multiple regression with dummy coding vs. ANCOVA?
|
ANCOVA is a form of regression but not identical to other multiple regression techniques. SPSS is not robust enough software to trust in anything outside of some psychology research. Within econometrics, biology, chemistry, physics, and finance SPSS is not accurate or useful in general. Even within psychology, SPSS preset regression corrections are often problematic.
Within education research here are examples of misuses of multiple regression and ANCOVA; they are similar but it is 100% wrong to say they are the same or almost identical.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5701329/
|
When should one use multiple regression with dummy coding vs. ANCOVA?
|
ANCOVA is a form of regression but not identical to other multiple regression techniques. SPSS is not robust enough software to trust in anything outside of some psychology research. Within econometri
|
When should one use multiple regression with dummy coding vs. ANCOVA?
ANCOVA is a form of regression but not identical to other multiple regression techniques. SPSS is not robust enough software to trust in anything outside of some psychology research. Within econometrics, biology, chemistry, physics, and finance SPSS is not accurate or useful in general. Even within psychology, SPSS preset regression corrections are often problematic.
Within education research here are examples of misuses of multiple regression and ANCOVA; they are similar but it is 100% wrong to say they are the same or almost identical.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5701329/
|
When should one use multiple regression with dummy coding vs. ANCOVA?
ANCOVA is a form of regression but not identical to other multiple regression techniques. SPSS is not robust enough software to trust in anything outside of some psychology research. Within econometri
|
12,199
|
When should one use multiple regression with dummy coding vs. ANCOVA?
|
Multiple linear regression appears to me more appropriate than ANCOVA in this situation, as the journal reviewer recommends.
Try running both a multiple regression and an ANCOVA, and comparing the results. They probably will not be identical.
ANCOVA and multiple linear regression are similar, but regression is more appropriate when the emphasis is on the dependent outcome variable, while ANCOVA is more appropriate when the emphasis is on comparing the groups from one of the independent variables. In the experiment described above, the emphasis seems clearly to be on the outcome variable.
Finally, unless you are really certain that your way of doing things is better than the Reviewer's, and can explain why, you should probably just concede to the Reviewer's expertise so you can get your paper published.
|
When should one use multiple regression with dummy coding vs. ANCOVA?
|
Multiple linear regression appears to me more appropriate than ANCOVA in this situation, as the journal reviewer recommends.
Try running both a multiple regression and an ANCOVA, and comparing the res
|
When should one use multiple regression with dummy coding vs. ANCOVA?
Multiple linear regression appears to me more appropriate than ANCOVA in this situation, as the journal reviewer recommends.
Try running both a multiple regression and an ANCOVA, and comparing the results. They probably will not be identical.
ANCOVA and multiple linear regression are similar, but regression is more appropriate when the emphasis is on the dependent outcome variable, while ANCOVA is more appropriate when the emphasis is on comparing the groups from one of the independent variables. In the experiment described above, the emphasis seems clearly to be on the outcome variable.
Finally, unless you are really certain that you way of doing things is better than the Reviewer's, and can explain why, then you should probably just concede to the Reviewer's expertise, so you can get your paper published.
|
When should one use multiple regression with dummy coding vs. ANCOVA?
Multiple linear regression appears to me more appropriate than ANCOVA in this situation, as the journal reviewer recommends.
Try running both a multiple regression and an ANCOVA, and comparing the res
|
12,200
|
Fitting an exponential model to data
|
I am not completely sure what you're asking, because your lingo is off. But assuming that your variables aren't independent of one another (if they were, there would be no relation to find), I'll give it a try. If x is your independent (or predictor) variable and y is your dependent (or response) variable, then this should work.
# generate data
beta <- 0.05
n <- 100
temp <- data.frame(y = exp(beta * seq(n)) + rnorm(n), x = seq(n))
# plot data
plot(temp$x, temp$y)
# fit non-linear model
mod <- nls(y ~ exp(a + b * x), data = temp, start = list(a = 0, b = 0))
# add fitted curve
lines(temp$x, predict(mod, list(x = temp$x)))
|
Fitting an exponential model to data
|
I am not completely sure what you're asking, because your lingo is off. But assuming that your variables aren't independent of one another (if they were, then they're be no relation to find) I'll give
|
Fitting an exponential model to data
I am not completely sure what you're asking, because your lingo is off. But assuming that your variables aren't independent of one another (if they were, then they're be no relation to find) I'll give it a try. If x is your independent (or predictor) variable and y is your dependent (or response) variable, then this should work.
# generate data
beta <- 0.05
n <- 100
temp <- data.frame(y = exp(beta * seq(n)) + rnorm(n), x = seq(n))
# plot data
plot(temp$x, temp$y)
# fit non-linear model
mod <- nls(y ~ exp(a + b * x), data = temp, start = list(a = 0, b = 0))
# add fitted curve
lines(temp$x, predict(mod, list(x = temp$x)))
|
Fitting an exponential model to data
I am not completely sure what you're asking, because your lingo is off. But assuming that your variables aren't independent of one another (if they were, then they're be no relation to find) I'll give
|