idx | question | answer |
|---|---|---|
38,401 | Type I error and type II error trade off | When the boy first pretends there is a wolf and the villagers believe him, it's a Type I error. When he claims there is a wolf again, but no one takes him seriously even though this time it is true, it's a Type II error.
The villagers can avoid Type I errors by never believing the boy, but that will always cause a Type II error when there is a wolf around. Similarly, they can always believe him and never make a Type II error, but that will cause lots of Type I errors.
You can think of how scared the boy is as a kind of test statistic. If he's crying, his breathing and heart rate are elevated, and he has goose bumps or piloerection (his hair is standing up), then the villagers should take his claim more seriously. Requiring all these symptoms to be present and pronounced is analogous to using a small $\alpha$ in the graph that @slowloris posted.
38,402 | Type I error and type II error trade off | A life and death example of statistical errors
You are a paramedic and you approach the scene of a car accident. One victim is lying motionless on the road and you must assess whether the victim is dead or alive, so that the victim can be treated accordingly. Based on this information, which error results in the most costly mistake?
Null Hypothesis - The victim is alive.
Alternative Hypothesis - The victim is not alive (i.e., they are dead).
Type I error - You reject the null hypothesis when the null hypothesis is actually true.
Type II error - You fail to reject the null hypothesis when the alternative hypothesis is true.
Cost of Type I error - You erroneously presume that the victim is dead, and they do not receive an ambulance to the hospital for a life saving medical treatment.
Cost of Type II error - You erroneously send a dead person to the hospital in an ambulance.
Answer: As you can see, the cost of the Type I error is tremendously worse than the cost of the Type II error.
Therefore, you may consider the trade-off between these errors. In traditional statistical hypothesis testing, you could use this cost-benefit analysis to determine your alpha (Type I error rate) and beta (Type II error rate) before conducting an experiment. In our example, we would want a very small alpha (typically 0.05), but could live with a larger beta (typically 0.1 - 0.3, but we could live with >0.3 in this case). This would, for example, dramatically affect the required sample size of your experiment, because you are OK with incorrectly accepting the null hypothesis and reporting that a dead person is actually alive.
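To see concretely how the chosen alpha and beta drive the required sample size, here is a minimal R sketch using base R's power.prop.test; the effect size (the two proportions) is made up purely for illustration and is not part of the paramedic example.
# Hypothetical effect: detect a shift in a success proportion from 0.5 to 0.6
power.prop.test(p1 = 0.5, p2 = 0.6, sig.level = 0.05, power = 0.9)$n   # n per group with alpha = 0.05, beta = 0.1
power.prop.test(p1 = 0.5, p2 = 0.6, sig.level = 0.05, power = 0.7)$n   # tolerating beta = 0.3 needs far fewer cases
power.prop.test(p1 = 0.5, p2 = 0.6, sig.level = 0.01, power = 0.9)$n   # a stricter alpha pushes n back up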
In diagnostic testing, we can look at the trade-off between Type I and Type II errors in terms of the threshold we place between the distributions of a measurement taken from two populations. Using our example, we might measure the redness of the skin, with redder skin representing a living victim.
In the figure, we can see that the best place to put a threshold between these groups is at the lowest point between the two distributions. This location would result in the least overall error. However, we can make a deliberate trade-off here: by moving the threshold to the right, the probability of a Type I error is reduced at the expense of increasing the probability of a Type II error. In our example, this trade-off is good and would likely save someone's life, and our job as a paramedic.
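As a small numerical sketch of that trade-off, the R snippet below uses two invented normal distributions for the measurement, oriented so that larger values point towards the alternative ("dead") group; the means, standard deviations and thresholds are arbitrary and only meant to show the direction of the effect.
# Null ("alive") scores ~ N(0, 1); alternative ("dead") scores ~ N(2, 1) -- made-up numbers
# Decision rule: reject the null (declare dead) when the score exceeds the threshold
thresholds <- c(1.0, 1.5, 2.0, 2.5)
type1 <- 1 - pnorm(thresholds, mean = 0, sd = 1)   # P(declare dead | actually alive)
type2 <- pnorm(thresholds, mean = 2, sd = 1)       # P(declare alive | actually dead)
round(data.frame(threshold = thresholds, type1 = type1, type2 = type2), 3)
# Moving the threshold to the right shrinks the Type I error and inflates the Type II error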
Image Source: http://grasshopper.com/blog/the-errors-of-ab-testing-your-conclusions-can-make-things-worse/
38,403 | Type I error and type II error trade off | Google "illustration type 1 type 2 error" and have your pick. This site is one of many where you can find a figure like the one you are asking for.
Google "illustration type 1 type 2 error" and have your pick. This site is one of many you can find a figure like the one you are asking for. | Type I error and type II error trade off
Google "illustration type 1 type 2 error" and have your pick. This site is one of many you can find a figure like the one you are asking for. |
38,404 | What does it mean for a moment generating function to exist in a neighborhood of 0? | Adding to Bey's answer, there's a reason you might care about this. The idea is that the MGF is a Laplace transform, and in this case it requires that your (continuous) probability density $f(x)$ decreases at least exponentially fast for large $x$, i.e. $e^{tx}f(x)\rightarrow 0$ for $x\rightarrow\infty$. This can be somewhat weakened but the main idea survives.
Anyways, it's usually the case that if $t$ is too large, this becomes false. So for example if $f(x)=2e^{-2x}$, then the MGF exists (i.e. is finite) for $t\in[0,2)$. As long as $f$ is a density, everything is fine for $t<0$, but it turns out $t>0$ contains a wealth of additional information. In general, saying that the MGF exists in a neighborhood of $0$ means that there is some $\epsilon>0$ such that your MGF is finite for all $t\in[0,\epsilon)$. Once your MGF exists, by abstract nonsense it corresponds to a unique distribution (your $f(x)$) and you can exploit all of its nice properties, for example use it to bound probabilities. In a similar vein to characteristic functions (i.e. Fourier transforms), the regularity of your MGF near $t=0$ is intimately connected to the rate of decay of your density $f(x)$ as $x\rightarrow\infty$, an example of which you can see in the last link.
Perhaps more familiar to you, derivatives of the MGF, evaluated at $t=0$, give you back the moments of your distribution, so perhaps you can believe why you really only need to know what your MGF looks like near $t=0$ to extract almost everything about your random variable.
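To illustrate that last point with the $f(x)=2e^{-2x}$ example above, here is a small R sketch: for that density the MGF is $M(t)=2/(2-t)$ for $t<2$, and crude finite differences at $t=0$ recover the first two moments (this is only a numerical sanity check, not part of the original answer).
M <- function(t) 2 / (2 - t)        # MGF of f(x) = 2*exp(-2*x), valid for t < 2
h <- 1e-5
(M(h) - M(-h)) / (2 * h)            # numerical M'(0): about 0.5, the mean
(M(h) - 2 * M(0) + M(-h)) / h^2     # numerical M''(0): about 0.5, which is E[X^2]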
38,405 | What does it mean for a moment generating function to exist in a neighborhood of 0? | The MGF of a random variable $X$ is given by:
$$\mathrm{MGF}_X(t) = \mathbb{E}[e^{tX}],$$
which is a function of $t$.
As you can see, the MGF at $t=0$ is always $1$, even if $X$ is, say, a Cauchy random variable, which has no moments. So, basically, when mathematicians say "a neighborhood", they mean:
$$\exists \epsilon>0: \forall t \in (-\epsilon,\epsilon),\;\;\mathbb{E}[e^{tX}]< \infty$$
Note that $\epsilon$ just needs to be some positive constant.
38,406 | What does it mean for a moment generating function to exist in a neighborhood of 0? | I thought I'd chime in with an example that illustrates when we might have to worry about this.
Suppose $X \sim Exp(\lambda)$ so that the probability density function is:
$$f(x\vert \lambda) = \lambda e^{-\lambda x}$$
Then the moment generating function is:
$$M_X(t) = E(e^{Xt}) = \int_0^\infty e^{xt} \lambda e^{-\lambda x} dx = \lambda \int_0^\infty e^{x(t - \lambda)}dx$$
Notice that if $t - \lambda \geq 0$, then this integral diverges.
$$M_X(t) = \lambda\int_0^\infty e^{x(t - \lambda)}dx = \begin{cases}
\infty, & t \geq \lambda \\
\frac{\lambda}{\lambda - t}, & t < \lambda
\end{cases}$$
In other words, $M_X(t)$ is finite only when $t < \lambda$. But since $\lambda$ is strictly positive, there exists a neighborhood $N_\lambda(0)$ of $0$ on which the MGF exists, and we can use it in the usual way.
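As a quick numerical sanity check in R (with an arbitrary choice of $\lambda = 2$ and $t = 1 < \lambda$, not values from the question), the integral defining the MGF matches the closed form $\lambda/(\lambda - t)$:
lambda <- 2; t <- 1                                              # arbitrary values with t < lambda
integrate(function(x) exp(t * x) * dexp(x, rate = lambda), 0, Inf)$value
lambda / (lambda - t)                                            # both are 2
# For t >= lambda the integrand no longer decays, so the integral diverges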
38,407 | Difference between neural network architectures | Fully answering this question would require a lot of pages here. Don't forget, StackExchange is not a textbook that someone reads to you.
Multi-layered perceptron (MLP): these are the neural networks that (probably) started everything. They are strictly feed-forward (one directional), i.e. a node from one layer can only have connections to a node of the next layer (no crazy stuff here). All layers are fully connected. This is equivalent to a feed-forward neural network; both are directed graphs. Backprop is usually used to train these networks. The neurons/nodes in this network each compute a dot product of a weight vector belonging to that neuron with the input. The output is passed through a sigmoidal function, which makes it easy to compute gradients and forms the basis of the backprop algorithm.
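As a toy illustration of that forward pass (dot products followed by a sigmoid), here is a minimal R sketch of a one-hidden-layer MLP with random weights; the layer sizes and weights are arbitrary and no training is shown.
set.seed(1)
sigmoid <- function(z) 1 / (1 + exp(-z))
x  <- c(0.5, -1.2, 3.0)                                # one input vector with 3 features
W1 <- matrix(rnorm(4 * 3), nrow = 4); b1 <- rnorm(4)   # hidden layer with 4 units
W2 <- matrix(rnorm(1 * 4), nrow = 1); b2 <- rnorm(1)   # single output unit
h  <- sigmoid(W1 %*% x + b1)                           # each hidden unit: dot product + sigmoid
y  <- sigmoid(W2 %*% h + b2)                           # network output in (0, 1)
y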
Recurrent neural networks (RNNs) are networks whose connections form a directed cycle, essentially per layer, meaning that this kind of network has a (fixed) capacity to store information. It is/was often used on problems that require these specific "memory buffers", e.g. handwriting recognition. Training is usually performed by gradient descent (the principle behind backprop).
Hopfield network: can be seen as a (somewhat unofficial) form of an RNN. It only has one layer, which then (already) provides the outputs. The nodes, however, are interconnected in a special way -- Feedback-Nets (google it). One important point to make is that the neurons/nodes are of binary nature, i.e. they only take 1 or 0 as input. Training is usually performed by Hebbian learning.
Restricted Boltzmann machines (RBMs) also usually only take binary input. They can be described as a two-layer "network" (better: 'graph'). The first layer consists of visible units, i.e. the ones we observe. The second layer consists of hidden (latent) units, i.e. the ones we have to infer. These nets are trained using contrastive divergence (a mix of gradient descent and Gibbs sampling). Note that the training procedure does not optimize the exact energy function (I won't explain that here) but rather a different yet related quantity. In practice this works well. The power of these models lies in the fact that they can be stacked, i.e. one RBM after another, with training performed separately for each. Research on RBMs and their development into stacked models was mainly carried out by Geoffrey Hinton and his team. It can be categorized as a form of deep learning.
Recursive neural network: I actually never worked with them, so I probably can't say much about them. I think the main idea is that a neuron can point at itself and therefore enables temporal modeling. These networks can be unrolled and then trained in a regular fashion.
Convolutional neural network: these are usually a special kind of network used in deep learning, so let's discuss deep learning first. 'Deep' here essentially means having more and more layers in your model. Why didn't we do this before with MLPs? Well, backprop pushes the error the network has produced back towards the inputs, i.e. in reverse, using the derivatives w.r.t. all parameters. We said before that a non-linear transfer function is used in the neurons -- a sigmoidal function. The problem is that, with many layers, this function causes the gradient to vanish: you push your signal through multiple sigmoidal functions, whose outputs are capped at [0,1] or [-1,1] and whose gradients saturate. They were essentially replaced with rectified linear units (ReLU). These are zero from $-\infty$ to zero and grow linearly from zero to $+\infty$. That solved the issue of vanishing gradients. Another problem was that it took quite a long time to train such networks on the computers back then. This was resolved by porting the problem to modern GPUs, which can train the most sophisticated nets these days in roughly a week and the easier ones in less than a day.
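A rough R sketch of why saturating sigmoids cause trouble (not from the original answer): the derivative of the logistic function never exceeds 0.25, so, ignoring weight magnitudes, a chain of such factors through many layers shrinks geometrically, whereas ReLU passes a gradient of exactly 1 on its active side. The 20-layer figure below is arbitrary.
sigmoid_grad <- function(z) { s <- 1 / (1 + exp(-z)); s * (1 - s) }
relu_grad    <- function(z) as.numeric(z > 0)
sigmoid_grad(0)    # 0.25, the largest value the logistic derivative can take
0.25^20            # bound on the product of 20 such factors (ignoring weights) -- essentially zero
relu_grad(3)       # 1 for any positive pre-activation, so the signal is not damped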
CNN: So what is a convolutional neural network? In its simplest form it is a shallow MLP whose input is, most often, an image. Convolutional filters are computed over the image and give input to the next (second) layer. Note: the weights of the convolutional filters are learned as well during training. These days CNNs are almost always used in deep architectures in combination with pooling layers and other tricks of the trade.
Material for you:
Books to read:
Neural Networks for Pattern Recognition by Christopher M. Bishop -- everybody working with network structures such as the ones you asked about should have read this book.
The Deep Learning Book by Ian Goodfellow, Yoshua Bengio and Aaron Courville (+ the community). This book is still in progress and can hence be downloaded for free at this point in time: http://www.deeplearningbook.org/
Lectures:
Machine Learning Summer School: http://videolectures.net/mlss09uk_cambridge/?q=Machine%20Learning%20Summer%20School -- a very good summer school and other years are online as well. You should be interested in the talk by Geoffrey Hinton.
Deep Learning summer school: http://videolectures.net/deeplearning2015_montreal/?q=Deep%20Learning%20summer%20school -- this one should help you a lot.
These explanations are far from complete but hopefully correct. If you want to understand this field, you have to read a lot more than this.
Multi-layered perceptron (MLP): are the neural netwo | Difference between neural network architectures
To fully answer this question, it would require a lot of pages here. Don't forget, stackexchange is not a textbook from which people read for you.
Multi-layered perceptron (MLP): are the neural networks that (probably) started everything. They are strictly feed-forward (one directional), i.e. a node from one layer can only have connections to a node of the next layer (no crazy stuff here). All layers are fully connected. This is the equivalent to a feed-forward neural network. Both are directed graphs. Backprop is usually used to train these networks. They neurons/nodes in this network perform a dot-product of a weight-vector belonging to this neuron with the input. The output is passed through a sigmoidal function, which later makes it easy to compute gradients and form the backprop algorithms.
Recurrent neural networks (RNNs) are networks which form an undirected cycle, essentially per layer. Meaning that this kind of network has a (fixed) storage capacity of information. It is/was often used on problems that require these specific "memory buffers", e.g. handwriting recognition. Training is usually performed by gradient descent (the principle behind backprop).
Hopfield network: can be seen as an (somewhat unofficial) form of a RNN. It only has one layer, which then (already) provides outputs.. The nodes, however, are interconnected in a special way -- Feedback-Nets (google it). One important point to make is that the neurons/nodes are of binary nature, e.g. they only take 1 or 0 as an input. Training is usually performed by Hebbian learning.
Restricted Boltzman Machines (RBMs) also usually only take binary input. It can be described as a two-layer "network" (better: 'graph'). The first layer are visible units, i.e. we observe them. The second layer are hidden (latent) units, i.e. we have to infer them. These nets are trained using contrastive divergence (a mix of gradient descent and Gibbs Sampling). Note that the training procedure does not optimize the exact energy function (I won't explain that here) but rather a different yet related type. In practice this works well. The power of these models lies in the fact that they can be stacked, i.e. one RBM after another. Training is performed separately. Research on RBMs and their development into stacked models was mainly executed by Geoffrey Hinton and his team. It can be categorized as a form of deep learning.
Recursive neural network: I actually never worked with them, so I probably can't say much about them. I think the main idea is that a neuron can point at itself and therefore enables temporal modeling. These networks can be unrolled and then trained in a regular fashion.
Convolutional neural network: Are usually a special kind of networks in deep learning. Let's first discuss them. 'Deep' here essentially means to have more and more layers in your model. Why didn't we do this before with MLPs? Well, backprob pushes the error the network has produced back to the inputs, i.e. in reverse using the derivatives w.r.t. all parameters. We said before a non-linear transfer function is used in the neurons -- a sigmoidal function. The problem here is, that with many layers, this function causes the gradient to vanish. This is obvious, you put your signal through mutliple sigmoidal functions, which are capped at [0,1] or [-1,1]. They were essentially replaced with rectified linear units (ReLu). These are essentially zero from $-\infty$ to zero and grow linearly from zero to $+\infty$. That solved the issue of the vanishing gradients. Another problem was that it took quite a long time to train such networks on the computers back then. This was resolved by porting the problem to modern GPUs, which can train the most sophisticated nets these days in roughly a week and the more easier ones in less than a day.
CNN: So what is a convolutional neural network? In its simplest form it is a shallow MLP and the input is, e.g. and most often, an image. Convolutional filters are computed over the image and give input to the next (second) layer. Note: The weights of the convolutional filters are learned as well in the process. These days they are almost always used in deep architectures in combination with pooling layers and other tricks of the trade.
Material for you:
Books to read:
Neural Networks for Pattern Recognition by Christopher M. Bishop -everybody working with network structures such as the ones you asked for should have read this book.
The Deep Learning Book by Ian Goodfellow, Yoshua Bengio and Aaron Courville (+ the community), This book is still in progress and hence can be downloaded for free at this point in time: http://www.deeplearningbook.org/
Lectures:
Machine Learning Summer School: http://videolectures.net/mlss09uk_cambridge/?q=Machine%20Learning%20Summer%20School -- a very good summer school and other years are online as well. You should be interested in the talk by Geoffrey Hinton.
Deep Learning summer school: http://videolectures.net/deeplearning2015_montreal/?q=Deep%20Learning%20summer%20school -- this one should help you a lot.
These explanations are by far not complete but hopefully correct. If you want to understand this field, you have to read a lot more than this. | Difference between neural network architectures
To fully answer this question, it would require a lot of pages here. Don't forget, stackexchange is not a textbook from which people read for you.
Multi-layered perceptron (MLP): are the neural netwo |
38,408 | Is the Standard Deviation of a binomial dataset informative? | If you have a binomial random variable $X$, of size $N$, and with success probability $p$, i.e. $X \sim Bin(N;p)$, then the mean of $X$ is $Np$ and its variance is $Np(1-p)$, so as you say the variance is a second-degree polynomial in $p$. Note however that the variance also depends on $N$! The latter is important for estimating $p$:
If you observe 30 successes in 100 then the fraction of successes is 30/100 which is the number of successes divided by the size of the Binomial, i.e. $\frac{X}{N}$.
But if $X$ has mean $Np$, then $\frac{X}{N}$ has a mean equal to the mean of $X$ divided by $N$, because $N$ is a constant. In other words $\frac{X}{N}$ has mean $\frac{Np}{N}=p$. This implies that the fraction of successes observed is an unbiased estimator of the probability $p$.
To compute the variance of the estimator $\frac{X}{N}$, we have to divide the variance of $X$ by $N^2$ (variance of a (variable divided by a constant) is the (variance of the variable) divided by the square of the constant), so the variance of the estimator is $\frac{Np(1-p)}{N^2}=\frac{p(1-p)}{N}$. The standard deviation of the estimator is the square root of the variance so it is $\sqrt{\frac{p(1-p)}{N}}$.
So, if you throw a coin 100 times and you observe 49 heads, then $\frac{49}{100}$ is an estimate of the probability of tossing heads with that coin, and the standard deviation of this estimate is $\sqrt{\frac{0.49\times(1-0.49)}{100}}$.
If you toss the coin 1000 times and you observe 490 heads, then you again estimate the probability of tossing heads at $0.49$, and the standard deviation at $\sqrt{\frac{0.49\times(1-0.49)}{1000}}$.
Obviously, in the second case the standard deviation is smaller, so the estimator becomes more precise as you increase the number of tosses.
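A quick simulation in R (with an assumed true $p = 0.49$ and 10,000 replications, numbers chosen only for illustration) confirms the $\sqrt{p(1-p)/N}$ formula and the gain from 100 to 1000 tosses:
set.seed(42)
p <- 0.49                                           # assumed true probability of heads
phat100  <- rbinom(10000, size = 100,  prob = p) / 100
phat1000 <- rbinom(10000, size = 1000, prob = p) / 1000
c(sd(phat100),  sqrt(p * (1 - p) / 100))            # both about 0.050
c(sd(phat1000), sqrt(p * (1 - p) / 1000))           # both about 0.016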
You can conclude that, for a binomial random variable, the variance is a quadratic polynomial in $p$, but it also depends on $N$, and I think the standard deviation does contain information beyond the success probability alone.
In fact, the binomial distribution has two parameters and you will always need at least two moments (in this case the mean (the first moment) and the standard deviation (the square root of the second central moment)) to fully identify it.
P.S. A somewhat more general development, also for the Poisson-binomial case, can be found in my answer to Estimate accuracy of an estimation on Poisson binomial distribution.
38,409 | Is the Standard Deviation of a binomial dataset informative? | The family of Bernoulli distributions is completely parameterized by one number, usually called $p$. So any population statistic of a Bernoulli distribution must be some function of the parameter $p$. This does not mean that those statistics are descriptively useless!
For example, I can completely describe a box by giving its length, width, and height, but the volume is still a useful statistic!
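In the same spirit, here is a tiny R check (with an arbitrary $p = 0.3$) that the usual descriptive statistics of Bernoulli data are just the familiar functions of $p$, yet remain perfectly usable summaries:
set.seed(1)
p <- 0.3
x <- rbinom(1e5, size = 1, prob = p)   # a Bernoulli(p) sample
c(mean(x), p)                          # sample mean vs p
c(var(x),  p * (1 - p))                # sample variance vs p(1 - p), a function of p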
38,410 | Is the Standard Deviation of a binomial dataset informative? | You might think you have a point if you already knew the true value of the binomial parameter $p$ and that you really were dealing with a binomial experiment (independent Bernoulli trials at constant $p$). With $N$ cases, the variance of the number of successes in a binomial experiment is $N p (1-p)$, and (naively) dividing by $N$ to get the variance in the proportion of successes would give a value independent of $N$. But there are two problems with this. First, if you did know the value of $p$, you wouldn't need to do this analysis. Second, as @f-coppens points out, this naive approach to determining the variance in the observed success proportion is incorrect.
What you have is an estimate of $p$ based on a sample of $N$ cases. The confidence intervals around your estimate of $p$ depend on the value of $N$, improving approximately with the square root of $N$. I suspect that is the point your inquisitor is trying to make. See the Wikipedia page on the binomial distribution for formulas for confidence intervals. And this doesn't even get into whether all of your samples are modeled by a single parameter $p$.
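For instance, in R the exact binomial confidence interval for the same observed proportion narrows considerably as $N$ grows (the counts here are arbitrary):
binom.test(30, 100)$conf.int     # roughly (0.21, 0.40)
binom.test(300, 1000)$conf.int   # roughly (0.27, 0.33): same proportion, much tighter interval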
38,411 | Why does this data throw an error in R fitdistr? | The Weibull distribution has two parameters, the scale $\lambda$ and shape $k$ (I'm following Wikipedia's notation). Both parameters are positive real numbers.
The function fitdist from the fitdistrplus package uses the optim function to find the maximum likelihood estimates of the parameters. By default, optim imposes no constraints on the parameters and tries out negative numbers as well. But negative values for the scale or shape produce NaNs for the Weibull distribution. By using the options lower and upper, you can impose limits on the parameter search space for optim.
The gamma distribution also has two parameters and as with the Weibull distribution, both are positive. So the same limits lower = c(0, 0) can be used for the gamma distribution.
Edit
Here is a small comparison of the Weibull and gamma fit for the posted data. The errors for the gamma distribution arise because of bad starting values. I provide them manually and then it works fine without errors.
library(fitdistrplus)
temp <- c(477.25, 2615.56, 1279.98, 581.57, 13.55, 80.4, 6640.22, 759.46,
1142.33, 134, 1232.23, 389.81, 7811.65, 992.11, 1152.4, 3139.01,
2636.78, 3294.75, 2266.95, 32.12, 7356.84, 1448.54, 3606.82,
465.39, 950.5, 3721.49, 522.01, 1548.62, 2196.3, 256.8, 2959.72,
214.4, 134, 2307.79, 2112.74)
fit.weibull <- fitdist(temp, distr = "weibull", method = "mle", lower = c(0, 0))
fit.gamma <- fitdist(temp, distr = "gamma", method = "mle", lower = c(0, 0), start = list(scale = 1, shape = 1))
Plot the fit for the Weibull:
plot(fit.weibull)
And for the gamma distribution:
plot(fit.gamma)
They are practically indistinguishable. The AICs are virtually the same for both fits:
gofstat(list(fit.weibull, fit.gamma))
Goodness-of-fit statistics
1-mle-weibull 2-mle-gamma
Kolmogorov-Smirnov statistic 0.07288424 0.07970184
Cramer-von Mises statistic 0.02532353 0.02361358
Anderson-Darling statistic 0.20489012 0.17609146
Goodness-of-fit criteria
1-mle-weibull 2-mle-gamma
Aikake's Information Criterion 601.7909 601.5659
Bayesian Information Criterion 604.9016 604.6766
The function fitdist from the fitdistrpl | Why does this data throw an error in R fitdistr?
The Weibull distribution has two parameters, the scale $\lambda$ and shape $k$ (I'm following Wikipedia's notation). Both parameters are positive real numbers.
The function fitdist from the fitdistrplus package uses the optim function to find the maximum likelihood estimations of the parameters. By default, optim imposes no constraints on the parameters and tries out negative numbers as well. But negative values for the scale or shape produce NaNs for the Weibull distribution. By using the options lower and upper, you can impose limits on the parameter search space for optim.
The gamma distribution also has two parameters and as with the Weibull distribution, both are positive. So the same limits lower = c(0, 0) can be used for the gamma distribution.
Edit
Here is a small comparison of the Weibull and gamma fit for the posted data. The errors for the gamma distribution arise because of bad starting values. I provide them manually and then it works fine without errors.
library(fitdistrplus)
temp <- c(477.25, 2615.56, 1279.98, 581.57, 13.55, 80.4, 6640.22, 759.46,
1142.33, 134, 1232.23, 389.81, 7811.65, 992.11, 1152.4, 3139.01,
2636.78, 3294.75, 2266.95, 32.12, 7356.84, 1448.54, 3606.82,
465.39, 950.5, 3721.49, 522.01, 1548.62, 2196.3, 256.8, 2959.72,
214.4, 134, 2307.79, 2112.74)
fit.weibull <- fitdist(temp, distr = "weibull", method = "mle", lower = c(0, 0))
fit.gamma <- fitdist(temp, distr = "gamma", method = "mle", lower = c(0, 0), start = list(scale = 1, shape = 1))
Plot the fit for the Weibull:
plot(fit.weibull)
And for the gamma distribution:
plot(fit.gamma)
They are practically indistinguishable. The AICs are virtually the same for both fits:
gofstat(list(fit.weibull, fit.gamma))
Goodness-of-fit statistics
1-mle-weibull 2-mle-gamma
Kolmogorov-Smirnov statistic 0.07288424 0.07970184
Cramer-von Mises statistic 0.02532353 0.02361358
Anderson-Darling statistic 0.20489012 0.17609146
Goodness-of-fit criteria
1-mle-weibull 2-mle-gamma
Aikake's Information Criterion 601.7909 601.5659
Bayesian Information Criterion 604.9016 604.6766 | Why does this data throw an error in R fitdistr?
The Weibull distribution has two parameters, the scale $\lambda$ and shape $k$ (I'm following Wikipedia's notation). Both parameters are positive real numbers.
The function fitdist from the fitdistrpl |
38,412 | How to add noise to a random variable whose range is the unit interval? [closed] | A traditional way to handle constrained variables is to transform them into unconstrained variables, apply the jittering, and turn them back into the original scale.
For instance, if $d_i\in(0,1)$, one can use the logit transform
$$x_i=\text{logit}(d_i)=\log\left(\frac{d_i}{1-d_i}\right)$$
and add as much noise as necessary$$y_i=x_i+\epsilon_i$$ where $\epsilon_i$ is for instance a centred Gaussian variate, before returning
$$\delta_i=\exp(y_i)\big/(1+\exp(y_i))=1\big/(1+\exp(-y_i))$$
Here is an illustration in R:
> d=rbeta(10^4,2.4,6.2)
> logit=function(x){log(x/(1-x))}
> de=1/(1+exp(-rnorm(10^4,mean=logit(d),sd=2)))
For instance, if $d_i\in(0,1)$, on | How to add noise to a random variable whose range is the unit interval? [closed]
A traditional way to handle constrained variables is to transform them into unconstrained variables, apply the jittering, and turn them back into the original scale.
For instance, if $d_i\in(0,1)$, one can use the logit transform
$$x_i=\text{logit}(d_i)=\log\left(\frac{d_i}{1-d_i}\right)$$
and add as much noise as necessary$$y_i=x_i+\epsilon_i$$ where $\epsilon_i$ is for instance a centred Gaussian variate, before returning
$$\delta_i=\exp(y_i)\big/(1+\exp(y_i))=1\big/(1+\exp(-y_i))$$
Here is an illustration in R:
> d=rbeta(10^4,2.4,6.2)
> logit=function(x){log(x/(1-x))}
> de=1/(1+exp(-rnorm(10^4,mean=logit(d),sd=2))) | How to add noise to a random variable whose range is the unit interval? [closed]
A traditional way to handle constrained variables is to transform them into unconstrained variables, apply the jittering, and turn them back into the original scale.
For instance, if $d_i\in(0,1)$, on |
38,413 | How to add noise to a random variable whose range is the unit interval? [closed] | It appears there are many ways to accomplish what you are looking for. Here's one suggestion.
Treat each $d$ as if it were the value of some function $\Phi$ from the real number line to the unit interval. For example, let $\Phi$ denote the cumulative distribution function of the standard normal distribution, so that $d = \Phi(z)$ for some real-valued $z$. Then $z = Probit(d)$, where $Probit$ denotes the inverse of $\Phi$. Let $\hat{z} = z + a \cdot e$, where $e$ is a standard normal random variable and $a >0$ is a scaling factor. Let $\hat{d} = \Phi(\hat{z})$ be your noisy estimate of $d$. Putting everything together, you have
$\hat{d} = \Phi(Probit(d) + a \cdot e)$
which will lie in the open unit interval. By adjusting $a$ you adjust the degree of noise.
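A minimal R sketch of this, paralleling the logit example in the other answer; the input values and the scaling factor $a = 0.5$ are chosen arbitrarily:
set.seed(1)
d <- rbeta(10^4, 2.4, 6.2)                          # some values in (0, 1)
a <- 0.5                                            # arbitrary noise scale
d_hat <- pnorm(qnorm(d) + a * rnorm(length(d)))     # probit, add noise, map back with pnorm
range(d_hat)                                        # stays inside the open unit interval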
Treat each $d$ as if it were the value of some function $\Phi$ from the real number line to the unit inter | How to add noise to a random variable whose range is the unit interval? [closed]
It appears there are many ways to accomplish what you are looking for. Here's one suggestion.
Treat each $d$ as if it were the value of some function $\Phi$ from the real number line to the unit interval. For example, let $\Phi$ denote the cumulative density function of the standard normal distribution, so that $d = \Phi(z)$ for some real-valued $z$. Then $z = Probit(d)$, where $Probit$ denotes the inverse of $\Phi$. Let $\hat{z} = z + a \cdot e$, where $e$ is a standard normal random variable and $a >0$ is a scaling factor. Let $\hat{d} = \Phi(\hat{z})$ be your noisy estimate of $d$. Putting everything together, you have
$\hat{d} = \Phi(Probit(d) + a \cdot e)$
which will lie in the open unit interval. By adjusting $a$ you adjust the degree of noise. | How to add noise to a random variable whose range is the unit interval? [closed]
It appears there are many ways to accomplish what you are looking for. Here's one suggestion.
Treat each $d$ as if it were the value of some function $\Phi$ from the real number line to the unit inter |
38,414 | Why doesn't my gamma density plot match my histogram of samples? | If $X$ has density (pdf) $f$ then $1/X$ does not have density $1/f$ (which is not even a density). Indeed, the change-of-variables formula teaches us that $Y:=h(X)$ has density
$$\bigl|{(h^{-1})}'(y)\bigr|f(h^{-1}(y))$$
when $h$ is a "nice" invertible transformation.
I gave the formula for a general $h$, because $h^{-1}(y)=1/y$ when $h(x)=1/x$, and that could cause some confusion.
Then denoting by $f$ the density of your simulated Gamma distribution (rgamma), you have to compare your histogram with the density
$$ \frac{1}{y^2} f\left(\frac{1}{y}\right)$$
grid <- seq(0,100,by=0.1)
sig.post.shape <- 91
sig.post.rate <- 1247.52
set.seed(1);
hist(1/rgamma(grid, shape = sig.post.shape, rate = sig.post.rate), breaks=10, prob=TRUE)
lines(grid,1/grid^2*dgamma(1/grid, shape = sig.post.shape, rate = sig.post.rate), type = "l")
$$\bigl|{(h^{-1})}'(y) | Why doesn't my gamma density plot match my histogram of samples?
If $X$ has density (pdf) $f$ then $1/X$ does not have density $1/f$ (which is not even a density). Indeed, the change-of-variables formula teaches us that $Y:=h(X)$ has density
$$\bigl|{(h^{-1})}'(y)\bigr|f(h^{-1}(y))$$
when $h$ is a "nice" invertible transformation.
I gave the formula for a general $h$, because $h^{-1}(y)=1/y$ when $h(x)=1/x$, and that could cause some confusion.
Then denoting by $f$ the density of your simulated Gamma distribution (rgamma), you have to compare your histogram with the density
$$ \frac{1}{y^2} f\left(\frac{1}{y}\right)$$ :
grid <- seq(0,100,by=0.1)
sig.post.shape <- 91
sig.post.rate <- 1247.52
set.seed(1);
hist(1/rgamma(grid, shape = sig.post.shape, rate = sig.post.rate), breaks=10, prob=TRUE)
lines(grid,1/grid^2*dgamma(1/grid, shape = sig.post.shape, rate = sig.post.rate), type = "l") | Why doesn't my gamma density plot match my histogram of samples?
If $X$ has density (pdf) $f$ then $1/X$ does not have density $1/f$ (which is not even a density). Indeed, the change-of-variables formula teaches us that $Y:=h(X)$ has density
$$\bigl|{(h^{-1})}'(y) |
38,415 | Why doesn't my gamma density plot match my histogram of samples? | The problem here isn't at root anything to do with graphics.
The density of an inverse gamma distribution is not the reciprocal of the density of a gamma distribution, which is what your last syntax line implies. To see that, it's sufficient to note that such a relation would map near zero densities of the gamma to near infinite densities of the inverse gamma. In effect, this is what R is trying to draw, with the bizarre results you note. The function you draw isn't even a density function.
UPDATE. Stéphane Laurent followed this quickly, with a much fuller, definitive version. I am letting this stand as a Mickey Mouse "executive summary" answer.
38,416 | Simple non-linear regression problem | If you log-transformed your outcome variable and then fit a regression model, just exponentiate the predictions to plot it on the original scale.
In many cases, it's better to use some nonlinear functions such as polynomials or splines on the original scale, as @hejseb mentioned. This post might be of interest.
Here is an example in R using the mtcars dataset. The variables used here were chosen totally arbitrarily, just for illustration purposes.
First, we plot Log(Miles/Gallon) vs. Displacement. This looks approximately linear.
After fitting a linear regression model with the log-transformed Miles/Gallon, the prediction intervals on the log-scale look like this:
Exponentiating the prediction intervals, we finally get this graphic on the original scale:
This ensures that the prediction intervals never go below 0.
We could also fit a quadratic model on the original scale and plot the prediction intervals.
Using a quadratic fit on the original scale, we cannot be sure that the fit and prediction intervals stay above 0.
Here is the R code that I used to generate the figures.
#------------------------------------------------------------------------------------------------------------------------------
# Load data
#------------------------------------------------------------------------------------------------------------------------------
data(mtcars)
#------------------------------------------------------------------------------------------------------------------------------
# Scatterplot with log-transformation
#------------------------------------------------------------------------------------------------------------------------------
plot(log(mpg)~disp, data = mtcars, las = 1, pch = 16, xlab = "Displacement", ylab = "Log(Miles/Gallon)")
#------------------------------------------------------------------------------------------------------------------------------
# Linear regression with log-transformation
#------------------------------------------------------------------------------------------------------------------------------
log.mod <- lm(log(mpg)~disp, data = mtcars)
#------------------------------------------------------------------------------------------------------------------------------
# Prediction intervals
#------------------------------------------------------------------------------------------------------------------------------
newframe <- data.frame(disp = seq(min(mtcars$disp), max(mtcars$disp), length = 1000))
pred <- predict(log.mod, newdata = newframe, interval = "prediction")
#------------------------------------------------------------------------------------------------------------------------------
# Plot prediction intervals on log scale
#------------------------------------------------------------------------------------------------------------------------------
plot(log(mpg)~disp
, data = mtcars
, ylim = c(2, 4)
, las = 1
, pch = 16
, main = "Log scale"
, xlab = "Displacement", ylab = "Log(Miles/Gallon)")
lines(pred[,"fit"]~newframe$disp, col = "steelblue", lwd = 2)
lines(pred[,"lwr"]~newframe$disp, lty = 2)
lines(pred[,"upr"]~newframe$disp, lty = 2)
#------------------------------------------------------------------------------------------------------------------------------
# Plot prediction intervals on original scale
#------------------------------------------------------------------------------------------------------------------------------
plot(mpg~disp
, data = mtcars
, ylim = c(8, 38)
, las = 1
, pch = 16
, main = "Original scale"
, xlab = "Displacement", ylab = "Miles/Gallon")
lines(exp(pred[,"fit"])~newframe$disp, col = "steelblue", lwd = 2)
lines(exp(pred[,"lwr"])~newframe$disp, lty = 2)
lines(exp(pred[,"upr"])~newframe$disp, lty = 2)
#------------------------------------------------------------------------------------------------------------------------------
# Quadratic regression on original scale
#------------------------------------------------------------------------------------------------------------------------------
quad.lm <- lm(mpg~poly(disp, 2), data = mtcars)
#------------------------------------------------------------------------------------------------------------------------------
# Prediction intervals
#------------------------------------------------------------------------------------------------------------------------------
newframe <- data.frame(disp = seq(min(mtcars$disp), max(mtcars$disp), length = 1000))
pred <- predict(quad.lm, newdata = newframe, interval = "prediction")
#------------------------------------------------------------------------------------------------------------------------------
# Plot prediction intervals for the quadratic fit on the original scale
#------------------------------------------------------------------------------------------------------------------------------
plot(mpg~disp
, data = mtcars
, ylim = c(7, 36)
, las = 1
, pch = 16
, main = "Original scale"
, xlab = "Displacement", ylab = "Miles/Gallon")
lines(pred[,"fit"]~newframe$disp, col = "steelblue", lwd = 2)
lines(pred[,"lwr"]~newframe$disp, lty = 2)
lines(pred[,"upr"]~newframe$disp, lty = 2) | Simple non-linear regression problem | If you log-transformed your outcome variable and then fit a regression model, just exponentiate the predictions to plot it on the original scale.
In many cases, it's better to use some nonlinear funct | Simple non-linear regression problem
If you log-transformed your outcome variable and then fit a regression model, just exponentiate the predictions to plot it on the original scale.
In many cases, it's better to use some nonlinear functions such as polynomials or splines on the originale scale, as @hejseb mentioned. This post might be of interest.
Here is an example in R using the mtcars dataset. The variable used here were chosen totally arbitrarily, just for illustration purposes.
First, we plot Log(Miles/Gallon) vs. Displacement. This looks approximately linear.
After fitting a linear regression model with the log-transformed Miles/Gallon, the prediction intervals on the log-scale look like this:
Exponentiating the prediction intervals, we finally get this graphic on the original scale:
This ensures that the prediction intervals never go below 0.
We could also fit a quadratic model on the original scale and plot the prediction intervals.
Using a quadratic fit on the original scale, we cannot be sure that the fit and prediction intervals stay above 0.
Here is the R-code that I used to generate the figures.
#------------------------------------------------------------------------------------------------------------------------------
# Load data
#------------------------------------------------------------------------------------------------------------------------------
data(mtcars)
#------------------------------------------------------------------------------------------------------------------------------
# Scatterplot with log-transformation
#------------------------------------------------------------------------------------------------------------------------------
plot(log(mpg)~disp, data = mtcars, las = 1, pch = 16, xlab = "Displacement", ylab = "Log(Miles/Gallon)")
#------------------------------------------------------------------------------------------------------------------------------
# Linear regression with log-transformation
#------------------------------------------------------------------------------------------------------------------------------
log.mod <- lm(log(mpg)~disp, data = mtcars)
#------------------------------------------------------------------------------------------------------------------------------
# Prediction intervals
#------------------------------------------------------------------------------------------------------------------------------
newframe <- data.frame(disp = seq(min(mtcars$disp), max(mtcars$disp), length = 1000))
pred <- predict(log.mod, newdata = newframe, interval = "prediction")
#------------------------------------------------------------------------------------------------------------------------------
# Plot prediction intervals on log scale
#------------------------------------------------------------------------------------------------------------------------------
plot(log(mpg)~disp
, data = mtcars
, ylim = c(2, 4)
, las = 1
, pch = 16
, main = "Log scale"
, xlab = "Displacement", ylab = "Log(Miles/Gallon)")
lines(pred[,"fit"]~newframe$disp, col = "steelblue", lwd = 2)
lines(pred[,"lwr"]~newframe$disp, lty = 2)
lines(pred[,"upr"]~newframe$disp, lty = 2)
#------------------------------------------------------------------------------------------------------------------------------
# Plot prediction intervals on original scale
#------------------------------------------------------------------------------------------------------------------------------
plot(mpg~disp
, data = mtcars
, ylim = c(8, 38)
, las = 1
, pch = 16
, main = "Original scale"
, xlab = "Displacement", ylab = "Miles/Gallon")
lines(exp(pred[,"fit"])~newframe$disp, col = "steelblue", lwd = 2)
lines(exp(pred[,"lwr"])~newframe$disp, lty = 2)
lines(exp(pred[,"upr"])~newframe$disp, lty = 2)
#------------------------------------------------------------------------------------------------------------------------------
# Quadratic regression on original scale
#------------------------------------------------------------------------------------------------------------------------------
quad.lm <- lm(mpg~poly(disp, 2), data = mtcars)
#------------------------------------------------------------------------------------------------------------------------------
# Prediction intervals
#------------------------------------------------------------------------------------------------------------------------------
newframe <- data.frame(disp = seq(min(mtcars$disp), max(mtcars$disp), length = 1000))
pred <- predict(quad.lm, newdata = newframe, interval = "prediction")
#------------------------------------------------------------------------------------------------------------------------------
# Plot prediction intervals on original scale
#------------------------------------------------------------------------------------------------------------------------------
plot(mpg~disp
, data = mtcars
, ylim = c(7, 36)
, las = 1
, pch = 16
, main = "Original scale"
, xlab = "Displacement", ylab = "Miles/Gallon")
lines(pred[,"fit"]~newframe$disp, col = "steelblue", lwd = 2)
lines(pred[,"lwr"]~newframe$disp, lty = 2)
lines(pred[,"upr"]~newframe$disp, lty = 2) | Simple non-linear regression problem
If you log-transformed your outcome variable and then fit a regression model, just exponentiate the predictions to plot it on the original scale.
In many cases, it's better to use some nonlinear funct |
38,417 | Simple non-linear regression problem | If all you want is a quadratic term, you can use lm(y~x+I(x^2)). An example:
For your model that would mean predictions <- lm(price~mileage+I(mileage^2), data = ads_clean). For higher order polynomials, you can just add them in the same way. You could also try some nonparametric regression, for example locpoly.
x <- rnorm(100)
y <- x + x^2 + rnorm(100)
plot(x, y)
model1 <- lm(y ~ x + I(x^2))
plotdata <- cbind(x, predict(model1))
lines(plotdata[order(x),], col = "red")
Please be aware that, depending on your goal, this might be associated with other problems such as heteroscedasticity. If you want to make inference, you need to pay extra care that the assumptions you would rely on actually appear to be satisfied. But, if you are truly only interested in how to get a curve instead of a straight line and you're just playing around, this is sufficient. | Simple non-linear regression problem | If all you want is a quadratic term, you can use lm(y~x+I(x^2)). An example:
For your model that would mean predictions <- lm(price~mileage+I(mileage^2), data = ads_clean). For higher order polynomia | Simple non-linear regression problem
If all you want is a quadratic term, you can use lm(y~x+I(x^2)). An example:
For your model that would mean predictions <- lm(price~mileage+I(mileage^2), data = ads_clean). For higher order polynomials, you can just add them in the same way. You could also try some nonparametric regression, for example locpoly.
x <- rnorm(100)
y <- x + x^2 + rnorm(100)
plot(x, y)
model1 <- lm(y ~ x + I(x^2))
plotdata <- cbind(x, predict(model1))
lines(plotdata[order(x),], col = "red")
Please be aware that, depending on your goal, this might be associated with other problems such as heteroscedasticity. If you want to make inference, you need to pay extra care that the assumptions you would rely on actually appear to be satisfied. But, if you are truly only interested in how to get a curve instead of a straight line and you're just playing around, this is sufficient. | Simple non-linear regression problem
If all you want is a quadratic term, you can use lm(y~x+I(x^2)). An example:
For your model that would mean predictions <- lm(price~mileage+I(mileage^2), data = ads_clean). For higher order polynomia |
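A minimal sketch of the locpoly suggestion above, assuming the KernSmooth implementation and its dpill plug-in bandwidth selector (neither is named explicitly in the answer); it overlays a nonparametric local-linear fit on the same kind of scatterplot:
# local polynomial regression as a nonparametric alternative to the quadratic fit
library(KernSmooth)
set.seed(42)
x <- rnorm(100)
y <- x + x^2 + rnorm(100)
plot(x, y)
h <- dpill(x, y)                     # plug-in bandwidth selector
fit <- locpoly(x, y, bandwidth = h)  # returns a list with $x (grid) and $y (fitted values)
lines(fit, col = "blue")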
38,418 | Feature selection before neural network classification | One way to think about the process of building a predictive model (such as a neural network) is that you have a 'budget' of information to spend, much like a certain amount of money for a monthly household budget. With only 87 observations in your training set (and only 36 more in your test set), you have a very skimpy budget. In addition, there is much less information in a binary indicator (i.e., your predicted variable is positive vs. negative) than there is in a continuous variable. In truth, you may only have enough information to reliably estimate the proportion positive.
Neural networks have many advantages, but they require very large numbers of parameters to be estimated. When you have a hidden layer (or more than one hidden layer), and multiple input variables, the number of parameters (link weights) that need to be accurately estimated explodes. But every parameter to be estimated consumes some of your informational budget. You are essentially guaranteed to overfit this model (note that this has nothing to do with the computational feasibility of the algorithm). Unfortunately, I don't think cross-validation will get you out of these problems.
If you are committed to building a predictive model using your continuous variables, I would try a logistic regression model instead of a NN. It will use fewer parameters. I would fit the model with probably only one variable, or at most a couple, and use the test set to see if the additional variables (beyond the intercept only) create instability and reduce your out of sample accuracy.
Regarding the X variables themselves, I would use a method that is blind to the outcome. Specifically, I would try principal components analysis (PCA) and extract just the first one or two PCs. I honestly think this is going to be the best you are going to be able to do. | Feature selection before neural network classification | One way to think about the process of building a predictive model (such as a neural network) is that you have a 'budget' of information to spend, much like a certain amount of money for a monthly hous | Feature selection before neural network classification
One way to think about the process of building a predictive model (such as a neural network) is that you have a 'budget' of information to spend, much like a certain amount of money for a monthly household budget. With only 87 observations in your training set (and only 36 more in your test set), you have a very skimpy budget. In addition, there is much less information in a binary indicator (i.e., your predicted variable is positive vs. negative) than there is in a continuous variable. In truth, you may only have enough information to reliably estimate the proportion positive.
Neural networks have many advantages, but they require very large numbers of parameters to be estimated. When you have a hidden layer (or more than one hidden layer), and multiple input variables, the number of parameters (link weights) that need to be accurately estimated explodes. But every parameter to be estimated consumes some of your informational budget. You are essentially guaranteed to overfit this model (note that this has nothing to do with the computational feasibility of the algorithm). Unfortunately, I don't think cross-validation will get you out of these problems.
If you are committed to building a predictive model using your continuous variables, I would try a logistic regression model instead of a NN. It will use fewer parameters. I would fit the model with probably only one variable, or at most a couple, and use the test set to see if the additional variables (beyond the intercept only) create instability and reduce your out of sample accuracy.
Regarding the X variables themselves, I would use a method that is blind to the outcome. Specifically, I would try principal components analysis (PCA) and extract just the first one or two PCs. I honestly think this is going to be the best you are going to be able to do. | Feature selection before neural network classification
One way to think about the process of building a predictive model (such as a neural network) is that you have a 'budget' of information to spend, much like a certain amount of money for a monthly hous |
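A minimal sketch of the approach described above (PCA that is blind to the outcome, followed by a small logistic regression); the synthetic data and object names are placeholders standing in for the asker's 87 training observations, assumed here to have 10 continuous predictors:
set.seed(1)
X <- matrix(rnorm(87 * 10), nrow = 87)         # placeholder predictor matrix
y <- rbinom(87, 1, plogis(X[, 1]))             # placeholder binary outcome
pc <- prcomp(X, center = TRUE, scale. = TRUE)  # PCA ignores the outcome
train <- data.frame(y = y, PC1 = pc$x[, 1], PC2 = pc$x[, 2])
fit <- glm(y ~ PC1 + PC2, data = train, family = binomial)
summary(fit)                                   # very few parameters relative to n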
38,419 | Feature selection before neural network classification | With respect to your first question, it depends on your computer.
With respect to the second, there is no single best answer. Neural networks are themselves often used for feature selection. This is the paradigm leading to deep learning. In that case it is unlikely you'd want to do any feature selection (except maybe whitening of the data).
If you're fitting a shallow network via backprop and are worried about overfitting, doing the (unprincipled but often effective) PCA and dropping less important components might help you out. | Feature selection before neural network classification | With respect to your first question, it depends on your computer.
With respect to the second, there is no single best answer. Neural networks are themselves often used for feature selection. This is t | Feature selection before neural network classification
With respect to your first question, it depends on your computer.
With respect to the second, there is no single best answer. Neural networks are themselves often used for feature selection. This is the paradigm leading to deep learning. In that case it is unlikely you'd want to do any feature selection (except maybe whitening of the data).
If you're fitting a shallow network via backprop and are worried about overfitting, doing the (unprincipled but often effective) PCA and dropping less important components might help you out. | Feature selection before neural network classification
With respect to your first question, it depends on your computer.
With respect to the second, there is no single best answer. Neural networks are themselves often used for feature selection. This is t |
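A minimal sketch of the whitening idea mentioned above (decorrelating the inputs and rescaling them to unit variance via PCA); the matrix X is synthetic, not the asker's data:
set.seed(2)
X <- matrix(rnorm(87 * 10), nrow = 87)
pc <- prcomp(X, center = TRUE, scale. = TRUE)
X_white <- pc$x %*% diag(1 / pc$sdev)  # whitened scores
round(cov(X_white), 2)                 # approximately the identity matrix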
38,420 | Feature selection before neural network classification | Adding my own two cents to the previous answers. You have a big problem with the curse of dimensionality. This problem with the tiny corpus you have makes me think that the best option in this case would be a simple Bayesian. | Feature selection before neural network classification | Adding my own two cents to the previous answers. You have a big problem with the curse of dimensionality. This problem with the tiny corpus you have makes me think that the best option in this case wo | Feature selection before neural network classification
Adding my own two cents to the previous answers. You have a big problem with the curse of dimensionality. This problem with the tiny corpus you have makes me think that the best option in this case would be a simple Bayesian. | Feature selection before neural network classification
Adding my own two cents to the previous answers. You have a big problem with the curse of dimensionality. This problem with the tiny corpus you have makes me think that the best option in this case wo |
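Assuming "a simple Bayesian" refers to a naive Bayes classifier, a minimal sketch with the e1071 package (the data frame is synthetic and only stands in for the small corpus):
library(e1071)
set.seed(3)
d <- data.frame(x1 = rnorm(87), x2 = rnorm(87), y = factor(rbinom(87, 1, 0.5)))
nb <- naiveBayes(y ~ x1 + x2, data = d)
head(predict(nb, d, type = "class"))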
38,421 | Should I categorise my continuous variable for use in binary logistic regression | This can cut two ways, but mostly one. In logistic regression, as with any flavour of regression, it is fine, indeed usually better, to have continuous predictors.
Given a choice between a continuous variable as a predictor and categorising a continuous variable for predictors, the first is usually to be preferred. At the crudest level, you are just throwing away information by categorising a continuous variable. There is discussion in several places. Frank Harrell in his Regression modeling strategies (New York: Springer, 2001; Cham, Springer, 2015) has a nice treatment of the issue and gives references.
Also, there is not really an equivalent of empty cells to worry about. Values of family size, which is your leading example here, may not exist for 13, 14, 15, 16 members, or for that matter for 42 or 420. This is no more a problem than using persons' height as a predictor and not having someone 3 metres tall in your dataset.
It's true that the same problem may bite in terms of outliers, but that can happen with the categorical solution too. If some points are 0 and a few are 1 or a very few are 5, that's possibly an outlier situation too.
The qualification is that by entering a predictor as is you are implying that its effect is additive and linear. But that's not a fatal objection: just consider adding an interaction term or transforming it, as appropriate. Or a treatment in terms of splines: the book just cited is rich in examples. | Should I categorise my continuous variable for use in binary logistic regression | This can cut two ways, but mostly one. In logistic regression, as with any flavour of regression, it is fine, indeed usually better, to have continuous predictors.
Given a choice between a continuous | Should I categorise my continuous variable for use in binary logistic regression
This can cut two ways, but mostly one. In logistic regression, as with any flavour of regression, it is fine, indeed usually better, to have continuous predictors.
Given a choice between a continuous variable as a predictor and categorising a continuous variable for predictors, the first is usually to be preferred. At the crudest level, you are just throwing away information by categorising a continuous variable. There is discussion in several places. Frank Harrell in his Regression modeling strategies (New York: Springer, 2001; Cham, Springer, 2015) has a nice treatment of the issue and gives references.
Also, there is not really an equivalent of empty cells to worry about. Values of family size, which is your leading example here, may not exist for 13, 14, 15, 16 members, or for that matter for 42 or 420. This is no more a problem than using persons' height as a predictor and not having someone 3 metres tall in your dataset.
It's true that the same problem may bite in terms of outliers, but that can happen with the categorical solution too. If some points are 0 and a few are 1 or a very few are 5, that's possibly an outlier situation too.
The qualification is that by entering a predictor as is you are implying that its effect is additive and linear. But that's not a fatal objection: just consider adding an interaction term or transforming it, as appropriate. Or a treatment in terms of splines: the book just cited is rich in examples. | Should I categorise my continuous variable for use in binary logistic regression
This can cut two ways, but mostly one. In logistic regression, as with any flavour of regression, it is fine, indeed usually better, to have continuous predictors.
Given a choice between a continuous |
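A minimal sketch of the spline treatment mentioned at the end of the answer above, keeping the predictor continuous but letting its effect on the log odds bend; the variable names and simulated family sizes are illustrative assumptions:
library(splines)
set.seed(4)
d <- data.frame(famsize = sample(1:12, 300, replace = TRUE))
d$y <- rbinom(300, 1, plogis(-1 + 0.3 * d$famsize))
fit <- glm(y ~ ns(famsize, df = 3), data = d, family = binomial)  # natural spline with 3 df
summary(fit)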
38,422 | Should I categorise my continuous variable for use in binary logistic regression | It is true that reducing an ordinal or even continuous variable to dichotomous level loses a lot of information, but this is a concern for the dependent variable (i.e. dichotomizing a continuous dependent variable) in logistic regression. For continuous predictors (independent variables), logistic regression assumes that predictors are linearly related to the log odds of the outcome (an assumption known as “linearity of the logit”). If this assumption is violated, logistic regression underestimates the strength of the association and rejects the association too easily, that is being not significant (not rejecting the null hypothesis) where it should be significant. The Box–Tidwell test can be performed to assess linearity in the log(odds) as required by logistic regression. If linearity is not observed, categorical scales for the continuous predictor can be examined on the basis of quartiles and logit graphs. Fractional polynomials and spline functions can also be used to model continuous predictors. For a good discussion of methods to examine the scale of a continuous covariate in the logodds, I suggest reading chapter 4 of Applied Logistic Regression, 3rd Edition, by Hosmer, Lemeshow, and Sturdivant. | Should I categorise my continuous variable for use in binary logistic regression | It is true that reducing an ordinal or even continuous variable to dichotomous level loses a lot of information, but this is a concern for the dependent variable (i.e. dichotomizing a continuous depen | Should I categorise my continuous variable for use in binary logistic regression
It is true that reducing an ordinal or even continuous variable to dichotomous level loses a lot of information, but this is a concern for the dependent variable (i.e. dichotomizing a continuous dependent variable) in logistic regression. For continuous predictors (independent variables), logistic regression assumes that predictors are linearly related to the log odds of the outcome (an assumption known as “linearity of the logit”). If this assumption is violated, logistic regression underestimates the strength of the association and rejects the association too easily, that is being not significant (not rejecting the null hypothesis) where it should be significant. The Box–Tidwell test can be performed to assess linearity in the log(odds) as required by logistic regression. If linearity is not observed, categorical scales for the continuous predictor can be examined on the basis of quartiles and logit graphs. Fractional polynomials and spline functions can also be used to model continuous predictors. For a good discussion of methods to examine the scale of a continuous covariate in the logodds, I suggest reading chapter 4 of Applied Logistic Regression, 3rd Edition, by Hosmer, Lemeshow, and Sturdivant. | Should I categorise my continuous variable for use in binary logistic regression
It is true that reducing an ordinal or even continuous variable to dichotomous level loses a lot of information, but this is a concern for the dependent variable (i.e. dichotomizing a continuous depen |
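A common manual version of the Box–Tidwell check described above is to add an x*log(x) term to the logistic model and test its coefficient; the data here is synthetic, and x must be strictly positive for the transformation to make sense:
set.seed(5)
x <- runif(300, 1, 10)
y <- rbinom(300, 1, plogis(-2 + 0.4 * x))
fit <- glm(y ~ x + I(x * log(x)), family = binomial)
summary(fit)  # a significant x*log(x) coefficient suggests non-linearity in the logit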
38,423 | Interpreting p-values in Fisher vs Neyman-Pearson frameworks | Fisher uses p-values as a continuous measure of evidence against a null hypothesis?
Perhaps. What convinces you of this?
So a p-value of 0.06 would indicate that there is no difference and the null hypothesis is true?
Not at all. How did you go from 'continuous measure of evidence against' to 'there is no difference'?
In particular, Fisher would not make the mistake of thinking that failure to reject makes $H_0$ actually true.
Does a p-value greater than alpha indicate that there is >5% chance of a type one error occurring?
No, for two reasons.
(i) if $p>\alpha$ you won't reject, so you can't commit a type I error at all
(ii) You don't even have an $\alpha$ probability of making a type I error, since the type I error rate is a conditional probability, and in real situations, the joint probability is close to zero (that is, point null hypotheses are almost never exactly true; you can only make a type I error when they are exactly true).
[ ... I suppose that I'm arguably acting more as a Bayesian there] | Interpreting p-values in Fisher vs Neyman-Pearson frameworks | Fisher uses p-values as a continuous measure of evidence against a null hypothesis?
Perhaps. What convinces you of this?
So a p-value of 0.06 would indicate that there is no difference and the null | Interpreting p-values in Fisher vs Neyman-Pearson frameworks
Fisher uses p-values as a continuous measure of evidence against a null hypothesis?
Perhaps. What convinces you of this?
So a p-value of 0.06 would indicate that there is no difference and the null hypothesis is true?
Not at all. How did you go from 'continuous measure of evidence against' to 'there is no difference'?
In particular, Fisher would not make the mistake of thinking that failure to reject makes $H_0$ actually true.
Does a p-value greater than alpha indicate that there is >5% chance of a type one error occurring?
No, for two reasons.
(i) if $p>\alpha$ you won't reject, so you can't commit a type I error at all
(ii) You don't even have an $\alpha$ probability of making a type I error, since the type I error rate is a conditional probability, and in real situations, the joint probability is close to zero (that is, point null hypotheses are almost never exactly true; you can only make a type I error when they are exactly true).
[ ... I suppose that I'm arguably acting more as a Bayesian there] | Interpreting p-values in Fisher vs Neyman-Pearson frameworks
Fisher uses p-values as a continuous measure of evidence against a null hypothesis?
Perhaps. What convinces you of this?
So a p-value of 0.06 would indicate that there is no difference and the null |
38,424 | Interpreting p-values in Fisher vs Neyman-Pearson frameworks | The issue here is that you need to be clearer on the definitions of these terms, and what those definitions imply.
Taking the p-value as a continuous measure of evidence against the null hypothesis means that there is no 'bright line' between "no difference" and "difference". As a result of this, $p=.04$ is essentially identical to $p=.06$, $p=.06$ is essentially identical to $p=.08$, and, moreover, $p=.04$ is still pretty similar to $p=.08$, in terms of the amount of evidence against the null hypothesis.
If you follow the Neyman-Pearson paradigm correctly, you would not reject the null hypothesis when $p>\alpha$. Thus, a type I error is not possible. Remember that a type I error is defined as rejecting the null hypothesis when it's true. Since you're not rejecting the null hypothesis, this can't apply.
It may help you to read my answer to a related question: When to use Fisher and Neyman-Pearson framework? | Interpreting p-values in Fisher vs Neyman-Pearson frameworks | The issue here is that you need to be clearer on the definitions of these terms, and what those definitions imply.
Taking the p-value as a continuous measure of evidence against the null hypothesis | Interpreting p-values in Fisher vs Neyman-Pearson frameworks
The issue here is that you need to be clearer on the definitions of these terms, and what those definitions imply.
Taking the p-value as a continuous measure of evidence against the null hypothesis means that there is no 'bright line' between "no difference" and "difference". As a result of this, $p=.04$ is essentially identical to $p=.06$, $p=.06$ is essentially identical to $p=.08$, and, moreover, $p=.04$ is still pretty similar to $p=.08$, in terms of the amount of evidence against the null hypothesis.
If you follow the Neyman-Pearson paradigm correctly, you would not reject the null hypothesis when $p>\alpha$. Thus, a type I error is not possible. Remember that a type I error is defined as rejecting the null hypothesis when it's true. Since you're not rejecting the null hypothesis, this can't apply.
It may help you to read my answer to a related question: When to use Fisher and Neyman-Pearson framework? | Interpreting p-values in Fisher vs Neyman-Pearson frameworks
The issue here is that you need to be clearer on the definitions of these terms, and what those definitions imply.
Taking the p-value as a continuous measure of evidence against the null hypothesis |
38,425 | How do I mathematically prove that k-means clustering converges to minimum squared error? | There is no k-means algorithm. K-means is the problem. Algorithms for it include MacQueen, Lloyd/Forgy, Hartigan/Wong and many more.
Most of these algorithms (all but exhaustive search I guess) will only find a local optimum, not the global optimum. They are heuristics. Fast heuristics...
It turns out the usual nearest-center heuristic sometimes even misses the local optimum, if cluster sizes vary much. Not by much. But rarely, points should not be assigned the nearest center.
Global optimum search is IIRC NP-hard, so you do not want to use a perfect algorithm, unless you assume that P=NP. | How do I mathematically prove that k-means clustering converges to minimum squared error? | There is no k-means algorithm. K-means is the problem. Algorithms for it include MacQueen, Lloyd/Forgy, Hartigan/Wong and many more.
Most of these algorithms (all but exhaustive search I guess) will o | How do I mathematically prove that k-means clustering converges to minimum squared error?
There is no k-means algorithm. K-means is the problem. Algorithms for it include MacQueen, Lloyd/Forgy, Hartigan/Wong and many more.
Most of these algorithms (all but exhaustive search I guess) will only find a local optimum, not the global optimum. They are heuristics. Fast heuristics...
It turns out the usual nearest-center heuristic sometimes even misses the local optimum, if cluster sizes vary much. Not by much. But rarely, points should not be assigned the nearest center.
Global optimum search is IIRC NP-hard, so you do not want to use a perfect algorithm, unless you assume that P=NP. | How do I mathematically prove that k-means clustering converges to minimum squared error?
There is no k-means algorithm. K-means is the problem. Algorithms for it include MacQueen, Lloyd/Forgy, Hartigan/Wong and many more.
Most of these algorithms (all but exhaustive search I guess) will o |
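For reference, base R's kmeans() implements several of the heuristics named above, and a sketch like the following (with synthetic data) shows how a single random start can land in a worse local optimum than taking the best of many restarts:
set.seed(6)
x <- rbind(matrix(rnorm(100, 0), ncol = 2),
           matrix(rnorm(100, 4), ncol = 2),
           matrix(rnorm(100, 8), ncol = 2))
fit1 <- kmeans(x, centers = 3, nstart = 1,  iter.max = 50, algorithm = "Lloyd")
fit2 <- kmeans(x, centers = 3, nstart = 25, iter.max = 50, algorithm = "Hartigan-Wong")
c(single_start = fit1$tot.withinss, many_starts = fit2$tot.withinss)  # pooled within-cluster SS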
38,426 | How do I mathematically prove that k-means clustering converges to minimum squared error? | K-means clustering does not guarantee you global optimum (although I'd not call K-means a "heuristic" technique). However you can do this: run K-means a number of times, each time with different random initial centres seed, and obtain a set of final cluster centres each time. If these sets appear similar enough - in the sense that you can easily identify the "same" final centres across the runs - then you are surely close to the global optimum. Then just average those corresponding final centres across the runs and input the obtained averaged centers as initial ones for one final run. That run is almost sure to give you the global optimum solution.
Another, similar to this, trick to obtain "good" initial centres is to randomly split the total sample into subsamples and to perform K-means on each, then again averaging the final centres and running clustering of the total sample. Yet one more way to get "good" initial centres is to run Ward hierarchical clustering first and get K clusters with it, and use their centres as the input to K-means. Read about some variants of K-means initializing.
Under the following conditions (or "assumptions") you are more likely to get at the global optimal solution in K-means clustering:
there is cluster structure in the data, i.e. the data is not single-cluster;
your K, the-number-of-clusters specification, is correct;
the number of variables is not very great: K-means is sensitive to the "curse of dimensionality", with many variables, a preliminary PCA would be a good idea;
clusters in the data are more or less spherical, and compact in their middle (such as normally distributed); variances in clusters are about equal.
K-means assignes, at each iteration, each object to the closest cluster centre. After all objects were thus assigned, the K centres are updated. It thus appears that a centre moves further towards the set of objects that were already "its" objects. That's why each iteration is an improvement, and the optimum - local or global, dependent on the initial centres choice - is reached. The optimized function is the pooled within-cluster sum-of-squares (because mean is the locus of minimal SS deviations from it), which is equivalent to minimizing the pooled within-cluster sum of pairwise squared euclidean distances normalized by the respective number-of-objects in a cluster. | How do I mathematically prove that k-means clustering converges to minimum squared error? | K-means clustering does not guarantee you global optimum (although I'd not call K-means a "heuristic" technique). However you can do this: run K-means a number of times, each time with different rando | How do I mathematically prove that k-means clustering converges to minimum squared error?
K-means clustering does not guarantee you global optimum (although I'd not call K-means a "heuristic" technique). However you can do this: run K-means a number of times, each time with different random initial centres seed, and obtain a set of final cluster centres each time. If these sets appear similar enough - in the sense that you can easily identify the "same" final centres across the runs - then you are surely close to the global optimum. Then just average those corresponding final centres across the runs and input the obtained averaged centers as initial ones for one final run. That run is almost sure to give you the global optimum solution.
Another, similar to this, trick to obtain "good" initial centres is to randomly split the total sample into subsamples and to perform K-means on each, then again averaging the final centres and running clustering of the total sample. Yet one more way to get "good" initial centres is to run Ward hierarchical clustering first and get K clusters with it, and use their centres as the input to K-means. Read about some variants of K-means initializing.
Under the following conditions (or "assumptions") you are more likely to get at the global optimal solution in K-means clustering:
there is cluster structure in the data, i.e. the data is not single-cluster;
your K, the-number-of-clusters specification, is correct;
the number of variables is not very great: K-means is sensitive to the "curse of dimensionality", with many variables, a preliminary PCA would be a good idea;
clusters in the data are more or less spherical, and compact in their middle (such as normally distributed); variances in clusters are about equal.
K-means assignes, at each iteration, each object to the closest cluster centre. After all objects were thus assigned, the K centres are updated. It thus appears that a centre moves further towards the set of objects that were already "its" objects. That's why each iteration is an improvement, and the optimum - local or global, dependent on the initial centres choice - is reached. The optimized function is the pooled within-cluster sum-of-squares (because mean is the locus of minimal SS deviations from it), which is equivalent to minimizing the pooled within-cluster sum of pairwise squared euclidean distances normalized by the respective number-of-objects in a cluster. | How do I mathematically prove that k-means clustering converges to minimum squared error?
K-means clustering does not guarantee you global optimum (although I'd not call K-means a "heuristic" technique). However you can do this: run K-means a number of times, each time with different rando |
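A minimal sketch of the Ward-then-K-means initialisation described above (synthetic data; two clusters assumed):
set.seed(7)
x <- rbind(matrix(rnorm(100, 0), ncol = 2), matrix(rnorm(100, 5), ncol = 2))
hc  <- hclust(dist(x), method = "ward.D2")
grp <- cutree(hc, k = 2)
init <- apply(x, 2, function(col) tapply(col, grp, mean))  # K x p matrix of Ward cluster centres
fit <- kmeans(x, centers = init)                           # K-means started from those centres
fit$centers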
38,427 | How do I mathematically prove that k-means clustering converges to minimum squared error? | You can actually prove that it converges to a local minimum and that, actually, you are performing Newton minimization on the quantization error functional. All the details are in the Léon Bottou and Yoshua Bengio's paper "Convergence Properties of the K-Means Algorithms" | How do I mathematically prove that k-means clustering converges to minimum squared error? | You can actually prove that it converges to a local minimum and that, actually, you are performing Newton minimization on the quantization error functional. All the details are in the Léon Bottou and | How do I mathematically prove that k-means clustering converges to minimum squared error?
You can actually prove that it converges to a local minimum and that, actually, you are performing Newton minimization on the quantization error functional. All the details are in the Léon Bottou and Yoshua Bengio's paper "Convergence Properties of the K-Means Algorithms" | How do I mathematically prove that k-means clustering converges to minimum squared error?
You can actually prove that it converges to a local minimum and that, actually, you are performing Newton minimization on the quantization error functional. All the details are in the Léon Bottou and |
38,428 | How do I mathematically prove that k-means clustering converges to minimum squared error? | The objective function in this type of clustering is to assign points to clusters in a way that the sum of squared distances to the centroids are minimized. However, with K-Means we (approximately) solve a different problem which has the same optimal solution as our original problem:
\begin{align}
&\min_{x} \sum_{i=1}^n \sum_{j=1}^k x_{ij} || p_i - y_j||^2\\
&\text{subject to:} \\
&\sum_{j=1}^k x_{ij} = 1 \quad \forall i\\
&x_{ij} \in \{0,1\} \quad \forall i, j \\
&y_j \in \mathbb{R}^d \quad \forall j
\end{align}
Instead of minimizing the distance to centroids, we minimize the distance to just any set of points that will give a better solution. It turns out that these points are exactly the centroids.
In this post, I have explained in detail that how the two steps of K-Means solve this optimization problem approximately. | How do I mathematically prove that k-means clustering converges to minimum squared error? | The objective function in this type of clustering is to assign points to clusters in a way that the sum of squared distances to the centroids are minimized. However, with K-Means we (approximately) so | How do I mathematically prove that k-means clustering converges to minimum squared error?
The objective function in this type of clustering is to assign points to clusters in a way that the sum of squared distances to the centroids are minimized. However, with K-Means we (approximately) solve a different problem which has the same optimal solution as our original problem:
\begin{align}
&\min_{x} \sum_{i=1}^n \sum_{j=1}^k x_{ij} || p_i - y_j||^2\\
&\text{subject to:} \\
&\sum_{j=1}^k x_{ij} = 1 \quad \forall i\\
&x_{ij} \in \{0,1\} \quad \forall i, j \\
&y_j \in \mathbb{R}^d \quad \forall j
\end{align}
Instead of minimizing the distance to centroids, we minimize the distance to just any set of points that will give a better solution. It turns out that these points are exactly the centroids.
In this post, I have explained in detail that how the two steps of K-Means solve this optimization problem approximately. | How do I mathematically prove that k-means clustering converges to minimum squared error?
The objective function in this type of clustering is to assign points to clusters in a way that the sum of squared distances to the centroids are minimized. However, with K-Means we (approximately) so |
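A small sketch of the two alternating sub-steps behind that formulation (assign each point to its nearest centre, i.e. optimise over the x_ij, then move each centre to the mean of its points, i.e. optimise over the y_j); the data and the number of iterations are arbitrary:
set.seed(8)
p <- matrix(rnorm(200), ncol = 2)           # 100 points in R^2
k <- 3
y <- p[sample(nrow(p), k), , drop = FALSE]  # initial centres
for (it in 1:5) {
  # step 1: nearest-centre assignment (the x_ij step)
  d2 <- sapply(1:k, function(j) rowSums((p - matrix(y[j, ], nrow(p), 2, byrow = TRUE))^2))
  a  <- max.col(-d2)
  # step 2: recompute each non-empty centre as the mean of its assigned points (the y_j step)
  for (j in unique(a)) y[j, ] <- colMeans(p[a == j, , drop = FALSE])
  cat("iteration", it, "objective", sum(d2[cbind(1:nrow(p), a)]), "\n")
}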
38,429 | Why are my p-values so high? | There are many possible explanations but some of the most common are:
The explanatory variables are not related to your response. So no problem here, you just have a negative finding.
Some or all of the variables are related, but they are highly correlated so you have a problem with "variance inflation factors". If you removed some of the variables you might find that the remaining ones do seem to make a significant contribution. However, this is not a solution for finding a final model - search for "multicollinearity" to find some of the issues and possible solutions.
Your sample size is too small.
Your model is mis-specified in some other way, e.g. the continuous data actually have a non-linear relationship to the logit of your response.
Your data is under- or over-dispersed for some reason, more than would be expected of a binomial variable (in which case you might be able to fix the problem by fitting a quasi likelihood model).
You somehow scrambled your data during manipulating it into shape to fit your model.
There's probably more but those are the obvious ones. | Why are my p-values so high? | There are many possible explanations but some of the most common are:
The explanatory variables are not related to your response. So no problem here, you just have a negative finding.
Some or all of | Why are my p-values so high?
There are many possible explanations but some of the most common are:
The explanatory variables are not related to your response. So no problem here, you just have a negative finding.
Some or all of the variables are related, but they are highly correlated so you have a problem with "variance inflation factors". If you removed some of the variables you might find that the remaining ones do seem to make a significant contribution. However, this is not a solution for finding a final model - search for "multicollinearity" to find some of the issues and possible solutions.
Your sample size is too small.
Your model is mis-specified in some other way, e.g. the continuous data actually have a non-linear relationship to the logit of your response.
Your data is under- or over-dispersed for some reason, more than would be expected of a binomial variable (in which case you might be able to fix the problem by fitting a quasi likelihood model).
You somehow scrambled your data during manipulating it into shape to fit your model.
There's probably more but those are the obvious ones. | Why are my p-values so high?
There are many possible explanations but some of the most common are:
The explanatory variables are not related to your response. So no problem here, you just have a negative finding.
Some or all of |
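For the multicollinearity item above, one common check is the vif() function in the car package; the model and data below are placeholders, not the asker's objects:
library(car)
set.seed(9)
d <- data.frame(x1 = rnorm(200))
d$x2 <- d$x1 + rnorm(200, sd = 0.1)         # deliberately collinear predictor
d$y  <- rbinom(200, 1, plogis(d$x1))
fit <- glm(y ~ x1 + x2, data = d, family = binomial)
vif(fit)                                    # values far above ~5-10 flag a collinearity problem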
38,430 | Why are my p-values so high? | Because your response is a ratio between 0 and 1, it might be possible there is a heterogeneous (skewness) between data. In this case, logistic regression can not take into account this heterogeneity very well. An alternative model is beta regression. Beta distribution is a good choice to consider heterogeneity and skewness between data. Also, there is a R package to do this regression. | Why are my p-values so high? | Because your response is a ratio between 0 and 1, it might be possible there is a heterogeneous (skewness) between data. In this case, logistic regression can not take into account this heterogeneity | Why are my p-values so high?
Because your response is a ratio between 0 and 1, it might be possible there is a heterogeneous (skewness) between data. In this case, logistic regression can not take into account this heterogeneity very well. An alternative model is beta regression. Beta distribution is a good choice to consider heterogeneity and skewness between data. Also, there is a R package to do this regression. | Why are my p-values so high?
Because your response is a ratio between 0 and 1, it might be possible there is a heterogeneous (skewness) between data. In this case, logistic regression can not take into account this heterogeneity |
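Presumably the R package meant above is betareg; a minimal sketch with synthetic data (the response must lie strictly between 0 and 1):
library(betareg)
set.seed(10)
d <- data.frame(x = rnorm(200))
mu <- plogis(0.5 + 0.8 * d$x)
d$prop <- rbeta(200, mu * 20, (1 - mu) * 20)  # proportion-type response in (0, 1)
fit <- betareg(prop ~ x, data = d)
summary(fit)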
38,431 | Why are my p-values so high? | I've seen p-values very close to 1.0 in ordinary least squares. A likely explanation is omitting an important explanatory variable. That inflates the error term making other variables look "suspiciously insignificant". A quickly constructed simulation follows but I've seen real cases that were more dramatic.
Omitted Included Omitted +
in Model N(0,1)/10
1 1 1.21268754
0 1 0.22398027
1 1 1.15664332
0 1 -0.0321136
1 1 0.84352449
0 1 -0.0770372
1 1 1.01447926
0 1 0.00845824
1 1 0.97894666
0 1 -0.160745
1 2 0.91627849
0 2 0.04907199
1 2 1.05989843
0 2 0.09999067
1 2 0.86461229
0 2 -0.0017369
1 2 0.9341983
0 2 0.03245005
1 2 1.09566388
0 2 0.22768268 | Why are my p-values so high? | I've seen p-values very close to 1.0 in ordinary least squares. A likely explanation is omitting an important explanatory variable. That inflates the error term making other variables look "suspicio | Why are my p-values so high?
I've seen p-values very close to 1.0 in ordinary least squares. A likely explanation is omitting an important explanatory variable. That inflates the error term making other variables look "suspiciously insignificant". A quickly constructed simulation follows but I've seen real cases that were more dramatic.
Omitted Included Omitted +
in Model N(0,1)/10
1 1 1.21268754
0 1 0.22398027
1 1 1.15664332
0 1 -0.0321136
1 1 0.84352449
0 1 -0.0770372
1 1 1.01447926
0 1 0.00845824
1 1 0.97894666
0 1 -0.160745
1 2 0.91627849
0 2 0.04907199
1 2 1.05989843
0 2 0.09999067
1 2 0.86461229
0 2 -0.0017369
1 2 0.9341983
0 2 0.03245005
1 2 1.09566388
0 2 0.22768268 | Why are my p-values so high?
I've seen p-values very close to 1.0 in ordinary least squares. A likely explanation is omitting an important explanatory variable. That inflates the error term making other variables look "suspicio |
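One way the quoted simulation might be reconstructed; the coefficients and sample size are assumptions chosen only to reproduce the qualitative effect (omitting a real predictor inflates the residual error and hence the p-value of the included one), not the exact values behind the table above:
set.seed(11)
n <- 40
included <- rnorm(n)
omitted  <- rnorm(n)
y <- 1 * included + 5 * omitted + rnorm(n) / 10
summary(lm(y ~ included + omitted))$coefficients["included", "Pr(>|t|)"]  # very small
summary(lm(y ~ included))$coefficients["included", "Pr(>|t|)"]            # much larger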
38,432 | Does ensembling (boosting) cause overfitting? | First of all, let's make sure what you mean by overfitting. I assume you mean that the algorithm has learned too many of the nuances of the training data and will not perform well when you apply it to new data it hasn't seen before (from a similar population). This would also be known as poor generalization.
All machine learning algorithms, boosting included, can overfit. Of course, standard multivariate linear regression is guaranteed to overfit due to Stein's phenomenon. If you care about overfitting and want to combat this, you need to make sure to "regularize" any algorithm that you apply. For regular regression, the simplest and often best method of regularization would be ridge regression.
For boosting specifically: combating overfitting is usually as simple as using cross-validation to determine how many boosting steps to take. On a more subtle level, you probably want to make sure to use a small enough learning rate. Really small learning rates take forever to overfit (they need a huge number of steps), so they are harder to get wrong. For pure accuracy, though, you want to use as small a learning rate as you can and push the boosting steps right up until the model does start to overfit; if you really care, you need to find the smallest learning rate that you can feasibly "bottom out". I believe gbm in R also bags a sample for each step, although I'm not sure that actually combats overfitting so much as it spreads the learning across the training data.
So, to your specific question, we can't really know if 400 ensembles is too many. In fact, the only way you really can is via Cross Validation or a hold out set (Or a kind of OOB estimate if your boosting algorithm does do the bagging at each step). If your base learner in each step is too strong or the learning rate is too high, then those 400 ensembles could easily be a drastic overfit. With no other data than a 21% to 70% gain, I would lean towards it overfitting. | Does ensembling (boosting) cause overfitting? | First of all, let's make sure what you mean by overfitting. I assume you mean that the algorithm has learned too many of the nuances of the training data and will not perform well when you apply it t | Does ensembling (boosting) cause overfitting?
First of all, let's make sure what you mean by overfitting. I assume you mean that the algorithm has learned too many of the nuances of the training data and will not perform well when you apply it to new data it hasn't seen before (from a similar population). This would also be known as poor generalization.
All machine learning algorithms, boosting included, can overfit. Of course, standard multivariate linear regression is guaranteed to overfit due to Stein's phenomenon. If you care about overfitting and want to combat this, you need to make sure to "regularize" any algorithm that you apply. For regular regression, the simplest and often best method of regularization would be ridge regression.
For boosting specifically: combating overfitting is usually as simple as using cross-validation to determine how many boosting steps to take. On a more subtle level, you probably want to make sure to use a small enough learning rate. Really small learning rates take forever to overfit (they need a huge number of steps), so they are harder to get wrong. For pure accuracy, though, you want to use as small a learning rate as you can and push the boosting steps right up until the model does start to overfit; if you really care, you need to find the smallest learning rate that you can feasibly "bottom out". I believe gbm in R also bags a sample for each step, although I'm not sure that actually combats overfitting so much as it spreads the learning across the training data.
So, to your specific question, we can't really know if 400 ensembles is too many. In fact, the only way you really can is via Cross Validation or a hold out set (Or a kind of OOB estimate if your boosting algorithm does do the bagging at each step). If your base learner in each step is too strong or the learning rate is too high, then those 400 ensembles could easily be a drastic overfit. With no other data than a 21% to 70% gain, I would lean towards it overfitting. | Does ensembling (boosting) cause overfitting?
First of all, let's make sure what you mean by overfitting. I assume you mean that the algorithm has learned too many of the nuances of the training data and will not perform well when you apply it t |
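A minimal sketch of choosing the number of boosting steps by cross-validation with gbm (the package the answer mentions); the data, shrinkage, depth and fold count are illustrative assumptions:
library(gbm)
set.seed(12)
d <- data.frame(x1 = rnorm(500), x2 = rnorm(500))
d$y <- rbinom(500, 1, plogis(d$x1 - d$x2))
fit <- gbm(y ~ x1 + x2, data = d, distribution = "bernoulli",
           n.trees = 2000, shrinkage = 0.01, interaction.depth = 2,
           bag.fraction = 0.5, cv.folds = 5)
gbm.perf(fit, method = "cv")  # CV-chosen number of trees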
38,433 | Does ensembling (boosting) cause overfitting? | The overfitting of boosting techniques is a topic that is not yet theoretically understood, but empirical results show that boosting seems to be very robust against overfitting.
The usual explanation for this phenomenon is as follows: the samples that are incorrectly predicted in one iteration will have higher weight in the next one. Thus, isolated and mislabelled points tend to strongly force the classifier to create a complicated hypothesis to fit them, which we will call overfitting. However, as the hypothesis is very non-linear due to the combination of several classifiers (ensembles), the hypothesis around those problematic points is so narrow that it is practically impossible for another point to lie there.
For example, in the example below, we can see one data point that is mislabelled. The generated hypothesis (green line) will create a narrow circle around this point, but it is very unlikely that another point in the test data lies exactly in the same position.
Quoting Patrick H. Winston lecture of Artificial Intelligence in the MIT: This [boosting] doesn't seem to overfit. That is an experimental result for which the literature is confused respect to providing an explanation. So, this stuff has been tried in all sort of problems like handwriting recognition, understanding speech, all sort of stuff uses boosting, and unlike other methods for some reason, and yet imperfectly understood, it does not seem to overfit. [...] In conclusion, this is magic, you always want to use it and it will work with any classifier. | Does ensembling (boosting) cause overfitting? | The overfitting of boosting techniques is a topic that is not yet theoretically understood, but empirically results show that boosting seems to be very robust against overfitting.
The usual explanatio | Does ensembling (boosting) cause overfitting?
The overfitting of boosting techniques is a topic that is not yet theoretically understood, but empirical results show that boosting seems to be very robust against overfitting.
The usual explanation for this phenomenon is as follows: the samples that are incorrectly predicted in one iteration will have higher weight in the next one. Thus, isolated and mislabelled points tend to strongly force the classifier to create a complicated hypothesis to fit them, which we will call overfitting. However, as the hypothesis is very non-linear due to the combination of several classifiers (ensembles), the hypothesis around those problematic points is so narrow that it is practically impossible for another point to lie there.
For example, in the example below, we can see one data point that is mislabelled. The generated hypothesis (green line) will create a narrow circle around this point, but it is very unlikely that another point in the test data lies exactly in the same position.
Quoting Patrick H. Winston lecture of Artificial Intelligence in the MIT: This [boosting] doesn't seem to overfit. That is an experimental result for which the literature is confused respect to providing an explanation. So, this stuff has been tried in all sort of problems like handwriting recognition, understanding speech, all sort of stuff uses boosting, and unlike other methods for some reason, and yet imperfectly understood, it does not seem to overfit. [...] In conclusion, this is magic, you always want to use it and it will work with any classifier. | Does ensembling (boosting) cause overfitting?
The overfitting of boosting techniques is a topic that is not yet theoretically understood, but empirical results show that boosting seems to be very robust against overfitting.
The usual explanatio |
38,434 | Does ensembling (boosting) cause overfitting? | I think a good way to see if you are overfitting is to see the agreement between the individual nodes in the ensemble. If you get a very high agreement of 95%, there is high chance that your model may be predicting a very low accuracy of the target variable. This would definitely be a high indicator that your model has been overtrained and is unable to generalize well. | Does ensembling (boosting) cause overfitting? | I think a good way to see if you are overfitting is to see the agreement between the individual nodes in the ensemble. If you get a very high agreement of 95%, there is high chance that your model may | Does ensembling (boosting) cause overfitting?
I think a good way to see if you are overfitting is to see the agreement between the individual nodes in the ensemble. If you get a very high agreement of 95%, there is high chance that your model may be predicting a very low accuracy of the target variable. This would definitely be a high indicator that your model has been overtrained and is unable to generalize well. | Does ensembling (boosting) cause overfitting?
I think a good way to see if you are overfitting is to see the agreement between the individual nodes in the ensemble. If you get a very high agreement of 95%, there is high chance that your model may |
38,435 | How to detect structural change in a timeseries | @Dail if you're more inclined to the applied rather than the theoretical behind detection of structural break, you might want try http://cran.r-project.org/web/packages/cpm/index.html this is the link for CPM package of R, where you can use processStream to find multiple break point in your time series. | How to detect structural change in a timeseries | @Dail if you're more inclined to the applied rather than the theoretical behind detection of structural break, you might want try http://cran.r-project.org/web/packages/cpm/index.html this is the link | How to detect structural change in a timeseries
@Dail if you're more inclined to the applied rather than the theoretical behind detection of structural break, you might want try http://cran.r-project.org/web/packages/cpm/index.html this is the link for CPM package of R, where you can use processStream to find multiple break point in your time series. | How to detect structural change in a timeseries
@Dail if you're more inclined to the applied rather than the theoretical behind detection of structural break, you might want try http://cran.r-project.org/web/packages/cpm/index.html this is the link |
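A minimal sketch of the processStream suggestion above; the argument names follow the cpm package documentation as I recall it and should be checked against the current manual rather than taken as given:
library(cpm)
set.seed(13)
x <- c(rnorm(100, 0), rnorm(100, 2))          # series with a mean shift at t = 100
res <- processStream(x, cpmType = "Student")  # sequential detection of multiple change points
res$changePoints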
38,436 | How to detect structural change in a timeseries | Change Points can arise from a number of possible causes. Each of the possible causes can be evaluated. In terms of increasing complexity : 1. detecting a change in the expected value is essentially Intervention Detection. Pursue the work of Ruey Tsay to understand what you need to do. His work does not cover detecting the onset of a new time trend, The second item that you might consider is detecting when and if the parameters of the model have changed. If you pursue Gregory Chow's work on testing the difference between parameters for known groupings and simply generalize that to search for possible points in time where the parameters have changed you could be successful. Next in terms of complexity is to conduct a test for a significant change in the variance of the residuals. Simply evaluate different possible breal points for variance change and conduct a sequence of F tests to find the point ( if any ) that the variance has changed. I have had personal experience in developing each of these three tests and possible cures in order to render the final error process Gaussian.
Thanks for the kudos Whuber ! | How to detect structural change in a timeseries | Change Points can arise from a number of possible causes. Each of the possible causes can be evaluated. In terms of increasing complexity : 1. detecting a change in the expected value is essentially I | How to detect structural change in a timeseries
Change Points can arise from a number of possible causes. Each of the possible causes can be evaluated. In terms of increasing complexity : 1. detecting a change in the expected value is essentially Intervention Detection. Pursue the work of Ruey Tsay to understand what you need to do. His work does not cover detecting the onset of a new time trend, The second item that you might consider is detecting when and if the parameters of the model have changed. If you pursue Gregory Chow's work on testing the difference between parameters for known groupings and simply generalize that to search for possible points in time where the parameters have changed you could be successful. Next in terms of complexity is to conduct a test for a significant change in the variance of the residuals. Simply evaluate different possible breal points for variance change and conduct a sequence of F tests to find the point ( if any ) that the variance has changed. I have had personal experience in developing each of these three tests and possible cures in order to render the final error process Gaussian.
Thanks for the kudos Whuber ! | How to detect structural change in a timeseries
Change Points can arise from a number of possible causes. Each of the possible causes can be evaluated. In terms of increasing complexity : 1. detecting a change in the expected value is essentially I |
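A minimal sketch of the third idea above (an F test for a change in the residual variance); the break location is assumed known here, whereas the answer suggests scanning candidate points:
set.seed(14)
e <- c(rnorm(100, sd = 1), rnorm(100, sd = 3))  # residuals whose variance changes at t = 100
var.test(e[1:100], e[101:200])                  # F test for equal variances before/after the break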
38,437 | How to detect structural change in a timeseries | Here's some demo R code that shows how to detect (endogenously) structural breaks in time series / longitudinal data.
# assuming you have a time series object in R (called `y` below; avoid naming
# it `ts`, which masks the base ts() constructor)
# 1. install package 'strucchange'
# 2. Then write down this code:
library(strucchange)
# estimate the break dates; breakpoints() expects a formula, and `y ~ 1`
# looks for shifts in the mean level of the series
bp_ts <- breakpoints(y ~ 1)
# this will give you the break dates and their confidence intervals
summary(bp_ts)
# store the confidence intervals
ci_ts <- confint(bp_ts)
## to plot the breakpoints with confidence intervals
plot(y)
lines(bp_ts)
lines(ci_ts)
Check out this example case that I have blogged about. | How to detect structural change in a timeseries | Here's some demo R code that shows how to detect (endogenously) structural breaks in time series / longitudinal data.
# assuming you have a 'ts' object in R
# 1. install package 'strucchange'
# 2. T | How to detect structural change in a timeseries
Here's some demo R code that shows how to detect (endogenously) structural breaks in time series / longitudinal data.
# assuming you have a 'ts' object in R
# 1. install package 'strucchange'
# 2. Then write down this code:
library(strucchange)
# store the breakdates
bp_ts <- breakpoints(ts)
# this will give you the break dates and their confidence intervals
summary(bp_ts)
# store the confidence intervals
ci_ts <- confint(bp_ts)
## to plot the breakpoints with confidence intervals
plot(ts)
lines(bp_ts)
lines(ci_ts)
Check out this example case that I have blogged about. | How to detect structural change in a timeseries
Here's some demo R code that shows how to detect (endogenously) structural breaks in time series / longitudinal data.
# assuming you have a 'ts' object in R
# 1. install package 'strucchange'
# 2. T |
38,438 | How to detect structural change in a timeseries | If you care to use R, a selection of the available packages is summarized in the CRAN task view on time series (https://cran.r-project.org/web/views/TimeSeries.html). Below is the relevant portion:
Change point detection is provided in strucchange and strucchangeRcpp
(using linear regression models) and in trend (using nonparametric
tests). The changepoint package provides many popular changepoint
methods, and ecp does nonparametric changepoint detection for
univariate and multivariate series. changepoint.np implements the
nonparametric PELT algorithm, changepoint.mv detects changepoints in
multivariate time series, while changepoint.geo implements the
high-dimensional changepoint detection method GeomCP. Factor-augmented
VAR (FAVAR) models are estimated by a Bayesian method with FAVAR.
InspectChangepoint uses sparse projection to estimate changepoints in
high-dimensional time series. Rbeast provides Bayesian change-point
detection and time series decomposition. breakfast includes methods
for fast multiple change-point detection and estimation. | How to detect structural change in a timeseries | If you care the use of R, a selection of packages available are summarized in the CRAN task view on time series (https://cran.r-project.org/web/views/TimeSeries.html). Below is the relevant portion:
| How to detect structural change in a timeseries
If you care the use of R, a selection of packages available are summarized in the CRAN task view on time series (https://cran.r-project.org/web/views/TimeSeries.html). Below is the relevant portion:
Change point detection is provided in strucchange and strucchangeRcpp
(using linear regression models) and in trend (using nonparametric
tests). The changepoint package provides many popular changepoint
methods, and ecp does nonparametric changepoint detection for
univariate and multivariate series. changepoint.np implements the
nonparametric PELT algorithm, changepoint.mv detects changepoints in
multivariate time series, while changepoint.geo implements the
high-dimensional changepoint detection method GeomCP. Factor-augmented
VAR (FAVAR) models are estimated by a Bayesian method with FAVAR.
InspectChangepoint uses sparse projection to estimate changepoints in
high-dimensional time series. Rbeast provides Bayesian change-point
detection and time series decomposition. breakfast includes methods
for fast multiple change-point detection and estimation. | How to detect structural change in a timeseries
If you care the use of R, a selection of packages available are summarized in the CRAN task view on time series (https://cran.r-project.org/web/views/TimeSeries.html). Below is the relevant portion:
|
38,439 | How to detect structural change in a timeseries | Searching on Google Scholar for "Bayesian changepoint detection" will produce some useful references, such as Adams and MacKay, which looks very interesting and sounds like the sort of thing you are looking for. There is also a good book on "Numerical Bayesian Methods Applied to Signal Processing" by O'Ruanaidh and Fitzgerald that I remember being very good on this sort of thing, but I don't have a copy anymore, so I can't check for relevant pages (but the index suggests there is a chapter on retrospective changepoint detection). | How to detect structural change in a timeseries | Searching on Google Scholar for "Bayesian changepoint detection" will produce some useful references, such as Adams and MacKay, which looks very interesting and sounds like the sort of thing you are lookin | How to detect structural change in a timeseries
Searching on Google scholar for "Bayesian changepoint detection" will produce some useful references, such as Adams and MacKay, which looks very interesting and sounds the sort of thing you are looking for. There is also a good book on "Numerical Bayesian Methods Applied to Signal Processing" by O'Ruanaidh and Fitzgerald that I remember being very good on this sort of thing, but I don't have a copy anymore, so I can't check for relevant pages (but the index suggests there is a chapter on retrospective changepoint detection). | How to detect structural change in a timeseries
Searching on Google scholar for "Bayesian changepoint detection" will produce some useful references, such as Adams and MacKay, which looks very interesting and sounds the sort of thing you are lookin |
38,440 | How to detect structural change in a timeseries | @Dail -- You don't need to know the date in advance. There are many options beyond the Chow Test. In practice, the Chow test can be undesirable because it assumes homoskedasticity, which is very often violated in real time series data. There is a famous paper on testing for structural breaks when the break dates are unknown, and the methods are now quite well developed. The reference is Andrews (1993), but you would probably prefer to just have a look at these slides, which provide an overview of the various tests, the theory, and examples of practical applications. There is an R package that you can use to implement the tests, called strucchange, which you can find more info about here | How to detect structural change in a timeseries | @Dail -- You don't need to know the date in advance. There are many options beyond the Chow Test. In practice, the Chow test can be undesirable because it assumes homoskedasticity, which is very often | How to detect structural change in a timeseries
@Dail -- You don't need to know the date in advance. There are many options beyond the Chow Test. In practice, the Chow test can be undesirable because it assumes homoskedasticity, which is very often violated in real time series data. There is a famous paper on testing for structural breaks when the break dates are unknown and methods are now quite well developed. The reference is Andrews, 1993 but you probably would prefer to just have a look at these slides though, which provide an overview of the various tests, the theory, and examples of practical applications. There is an R package that you can use to implement the tests called strucchange which you can find more info about here | How to detect structural change in a timeseries
@Dail -- You don't need to know the date in advance. There are many options beyond the Chow Test. In practice, the Chow test can be undesirable because it assumes homoskedasticity, which is very often |
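A small sketch of the unknown-break-date (Andrews-type supF) test in strucchange, on simulated data with a break halfway through; the data and settings are purely illustrative:
library(strucchange)
set.seed(123)
x <- rnorm(120)
y <- c(1 + 2 * x[1:60], 3 + 0.5 * x[61:120]) + rnorm(120, sd = 0.5)
dat <- data.frame(y = y, x = x)
# F statistics over all admissible break dates (no date specified in advance)
fs <- Fstats(y ~ x, data = dat, from = 0.15, to = 0.85)
sctest(fs)       # supF test for a break at an unknown date
breakpoints(fs)  # estimated break date (maximiser of the F sequence)
plot(fs)         # the F statistics against candidate break dates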
38,441 | How to detect structural change in a timeseries | I'd like to second what IrishStat has said and point you directly to two of Ruey Tsay's books:
Analysis of Financial Time Series, Third Edition, Wiley, 2010.
ISBN: 0-470-41435-9; 10-digits: 978-0470414354 (book's website with some R code)
An Introduction to Analysis of Financial Data with R, Wiley, 2013
ISBN: 0-470-89081-3; 10-digits: 978-0470890813
Furthermore, I suggest investigating:
Modelling Nonlinear Economic Time Series (Advanced Texts in Econometrics) (2011) by Timo Terasvirta, Dag Tjostheim, and Clive W. J. Granger. This book is very detailed and contains a great number of references. Chapter 6 of the book is where you should look. | How to detect structural change in a timeseries | I'd like to second what IrishStat has said and point you directly to two of Ruey Tsay's books:
Analysis of Financial Time Series, Third Edition, Wiley, 2010.
ISBN: 0-470-41435-9; 10-digits: 978-04704 | How to detect structural change in a timeseries
I'd like to second what IrishStat has said and point you directly to two of Ruey Tsay's books:
Analysis of Financial Time Series, Third Edition, Wiley, 2010.
ISBN: 0-470-41435-9; 10-digits: 978-0470414354 (book's website with some R code)
An Introduction to Analysis of Financial Data with R, Wiley, 2013
ISBN: 0-470-89081-3; 10-digits: 978-0470890813
Furthermore, I suggest investigating:
Modelling Nonlinear Economic Time Series (Advanced Texts in Econometrics) (2011) by Timo Terasvirta, Dag Tjostheim, and Clive W. J. Granger. This book is very detailed and contains a great number of references. Chapter 6 of the book is where you should look. | How to detect structural change in a timeseries
I'd like to second what IrishStat has said and point you directly to two of Ruey Tsay's books:
Analysis of Financial Time Series, Third Edition, Wiley, 2010.
ISBN: 0-470-41435-9; 10-digits: 978-04704 |
38,442 | How to detect structural change in a timeseries | The Chow test, multiple break-point tests (such as the Bai-Perron test), the Quandt-Andrews break point test, etc. are different tests available in EViews.
Read the description, assumptions and interpretation of each test before applying it to your data set. | How to detect structural change in a timeseries | The Chow test, multiple break-point tests (such as the Bai-Perron test), the Quandt-Andrews break point test, etc. are different tests available in EViews.
Read description, assumptions and interpretation of each tes | How to detect structural change in a timeseries
Chow test, multiple break-point test (such as Bia Perron test), Quant-Andrews break point test etc. are different test available in Eviews.
Read description, assumptions and interpretation of each test before applying it your data set. | How to detect structural change in a timeseries
Chow test, multiple break-point test (such as Bia Perron test), Quant-Andrews break point test etc. are different test available in Eviews.
Read description, assumptions and interpretation of each tes |
38,443 | What are good techniques and resources for teaching Bayes theorem? | I have to recommend the book "Doing Bayesian Data Analysis" by John Kruschke (Indiana). Having sampled a few "introductory" texts over the last while, this one really shines.
There are many really well explained points but I suppose the best lever he uses to introduce the notion of combining prior and evidence is to introduce Bayes in the context of a multi-way table, where the data cause you to restrict your attention to one row, and sum over marginals to get a posterior for the cell. It is then easily expansible to continuous variables and thence to multi-way distributions.
Might be worth your while looking at it. | What are good techniques and resources for teaching Bayes theorem? | I have to recommend the book "Doing Bayesian Data Analysis" by John Kruschke (Indiana). Having sampled a few "introductory" texts over the last while, this one really shines.
There are many really wel | What are good techniques and resources for teaching Bayes theorem?
I have to recommend the book "Doing Bayesian Data Analysis" by John Kruschke (Indiana). Having sampled a few "introductory" texts over the last while, this one really shines.
There are many really well explained points but I suppose the best lever he uses to introduce the notion of combining prior and evidence is to introduce Bayes in the context of a multi-way table, where the data cause you to restrict your attention to one row, and sum over marginals to get a posterior for the cell. It is then easily expansible to continuous variables and thence to multi-way distributions.
Might be worth your while looking at it. | What are good techniques and resources for teaching Bayes theorem?
I have to recommend the book "Doing Bayesian Data Analysis" by John Kruschke (Indiana). Having sampled a few "introductory" texts over the last while, this one really shines.
There are many really wel |
38,444 | What are good techniques and resources for teaching Bayes theorem? | For the basic Bayes Formula one common example to use is disease screening. Assume that you have a test for a disease that if used on someone who has the disease will show positive with 95% probability and if used with someone without the disease will show negative with 90% probability; further we know that 1 in 1,000 in the population have the disease. We randomly choose a person from the population (don't know ahead of time if they have the disease) and do the test, which turns out positive: what is the probability that they have the disease? This example is often eye-opening to a lot of people. One way to demonstrate this (and quickly show the effect of changes) is using the SensSpec.demo function in the TeachingDemos package for R (also see tkexamp in the same package for a GUI interface to this in the examples).
If you want to expand to Bayesian statistics then one fun approach is to start by showing the students a simple success/fail game like throwing a dart at a target, tossing a wadded up piece of paper into a basket, etc., and choosing a student that will play the game. Ask the students how many times out of 4 they predict the student will succeed, and use their prediction as parameters for a Beta distribution as the prior distribution (plot this to show where they think the true probability could be). Now have the student do the game 10 times and count the successes, use this as the data for a binomial likelihood, and combine with the prior to get a posterior distribution for the student's proportion of successes. Show how you moved from a prior to a posterior using data and fairly simple calculations. If you have time you can let the student play the game more times and use the first posterior as a new prior, then get an updated posterior, and show how the distribution changes with additional information. | What are good techniques and resources for teaching Bayes theorem? | For the basic Bayes Formula one common example to use is disease screening. Assume that you have a test for a disease that if used on someone who has the disease will show positive with 95% probabili | What are good techniques and resources for teaching Bayes theorem?
For the basic Bayes Formula one common example to use is disease screening. Assume that you have a test for a disease that if used on someone who has the disease will show positive with 95% probability and if used with someone without the disease will show negative with 90% probability; further we know that 1 in 1,000 in the population have the disease. We randomly choose a person from the population (don't know ahead of time if they have the disease) and do the test which turns out positive: what is the probability that they have the disease? This example is often eye-opening to a lot of people. One way to demonstrate this (and quickly show the effect of changes) is using the SensSpec.demo function in the TeachingDemos function for R (also see tkexamp in the same package for a GUI interface to this in the examples).
If you want to expand to Bayesian statistics then one fun approach is to start by showing the students a simple success/fail game like throwing a dart at a target, tossing a wadded up piece of paper into a basket, etc., and choosing a student that will play the game. Ask the students how many times out of 4 they predict the student will succeed, and use their prediction as parameters for a Beta distribution as the prior distribution (plot this to show where they think the true probability could be). Now have the student do the game 10 times and count the successes, use this as the data for a binomial likelihood, and combine with the prior to get a posterior distribution for the student's proportion of successes. Show how you moved from a prior to a posterior using data and fairly simple calculations. If you have time you can let the student play the game more times and use the first posterior as a new prior, then get an updated posterior, and show how the distribution changes with additional information. | What are good techniques and resources for teaching Bayes theorem?
For the basic Bayes Formula one common example to use is disease screening. Assume that you have a test for a disease that if used on someone who has the disease will show positive with 95% probabili |
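Both parts of this answer can be reproduced with a few lines of R; the Beta(2, 2) prior and the 6-out-of-10 outcome in the second half are just illustrative numbers, not part of the original example:
# disease screening: P(disease | positive test) via Bayes' formula
prev <- 1 / 1000   # prevalence
sens <- 0.95       # P(positive | disease)
spec <- 0.90       # P(negative | no disease)
p_pos <- sens * prev + (1 - spec) * (1 - prev)
sens * prev / p_pos          # about 0.0094, i.e. under 1%
# success/fail game: Beta prior updated by a binomial likelihood
prior_a <- 2; prior_b <- 2   # prior guess of 2 successes out of 4
succ <- 6; fail <- 4         # hypothetical result of 10 tries
curve(dbeta(x, prior_a + succ, prior_b + fail), 0, 1, ylab = "density")  # posterior
curve(dbeta(x, prior_a, prior_b), add = TRUE, lty = 2)                   # prior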
38,445 | What are good techniques and resources for teaching Bayes theorem? | The LessWrong website actually has a great visual explanation of Bayes' Theorem: Bayes' Theorem Illustrated (My Way). | What are good techniques and resources for teaching Bayes theorem? | The LessWrong website actually has a great visual explanation of Bayes' Theorem: Bayes' Theorem Illustrated (My Way). | What are good techniques and resources for teaching Bayes theorem?
The LessWrong website actually has a great visual explanation of Bayes' Theorem: Bayes' Theorem Illustrated (My Way). | What are good techniques and resources for teaching Bayes theorem?
The LessWrong website actually has a great visual explanation of Bayes' Theorem: Bayes' Theorem Illustrated (My Way). |
38,446 | What are good techniques and resources for teaching Bayes theorem? | Use a deck of cards.
What are the chances this card is a spade? What are the chances this card is a spade if I know the card is black?
What are the chances this card is a king? What are the chances this card is a king if I know it is a diamond? What are the chances it is a king if I know it is a face card?
Show them how it's used in everyday life. What are the chances it will take me less than 30 minutes to get to work? What if I leave at 8am? What are the chances I will wait in line at the checkout counter? What if it is 6pm? What are the chances this person committed the murder? What if we know he has the same blood type as the murderer? | What are good techniques and resources for teaching Bayes theorem? | Use a deck of cards.
What are the chances this card is a spade? What are the chances this card is a spade if I know the card is black?
What are the chances this card is a king? What are the chance | What are good techniques and resources for teaching Bayes theorem?
Use a deck of cards.
What are the chances this card is a spade? What are the chances this card is a spade if I know the card is black?
What are the chances this card is a king? What are the chances this card is a king if I know it is a diamond? What are the chances it is a king if I know it is a face card?
Show them how it's used in everyday life. What are the chances it will take me less than 30 minutes to get to work.. What if I leave at 8am? What are the chances I will wait in line at the check out counter? What if it is 6pm? What are the chances this person committed the murder? what if we know he has the same blood type as the murderer. | What are good techniques and resources for teaching Bayes theorem?
Use a deck of cards.
What are the chances this card is a spade? What are the chances this card is a spade if I know the card is black?
What are the chances this card is a king? What are the chance |
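The card questions can also be answered by brute-force enumeration, which makes the conditioning explicit; a small sketch in R (the vectors just encode a standard 52-card deck):
suits <- rep(c("spade", "heart", "diamond", "club"), each = 13)
ranks <- rep(c(2:10, "J", "Q", "K", "A"), times = 4)
mean(suits == "spade")                                  # P(spade) = 1/4
mean(suits[suits %in% c("spade", "club")] == "spade")   # P(spade | black) = 1/2
mean(ranks == "K")                                      # P(king) = 1/13
mean(ranks[suits == "diamond"] == "K")                  # P(king | diamond) = 1/13
mean(ranks[ranks %in% c("J", "Q", "K")] == "K")         # P(king | face card) = 1/3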
38,447 | Is there an R package with a pretty function that can deal effectively with outliers? | If you're importing your data with a command like, say,
read.table('yourfile.txt', header=TRUE, ...)
you can indicate what values are to be considered as "null" or NA values, by specifying na.strings = "999999999". We can also consider different values for indicating NA values. Consider the following file (fake.txt) where we want to treat "." and "999999999" as NA values:
1 2 .
3 999999999 4
5 6 7
then in R we would do:
> a <- read.table("fake.txt", na.strings=c(".","999999999"))
> a
V1 V2 V3
1 1 2 NA
2 3 NA 4
3 5 6 7
Otherwise, you can always filter your data as indicated by @Sacha in his comment. Here, it could be something like
a[a=="." | a==999999999] <- NA
Edit
In case there are multiple abnormal values that can possibly be observed in different columns with different values, but you know the likely range of admissible values, you can apply a function to each column. For example, define the following filter:
my.filter <- function(x, threshold=100) ifelse(x > threshold, NA, x)
then
a.filt <- apply(a, 2, my.filter)
will replace every value > 100 with NA in the matrix a.
Example:
> a <- replicate(10, rnorm(10))
> a[1,3] <- 99999999
> a[5,6] <- 99999999
> a[8,10] <- 99999990
> summary(a[,3])
Min. 1st Qu. Median Mean 3rd Qu. Max.
-1e+00 0e+00 0e+00 1e+07 1e+00 1e+08
> af <- apply(a, 2, my.filter)
> summary(af[,3])
Min. 1st Qu. Median Mean 3rd Qu. Max. NA's
-1.4640 -0.2680 0.4671 -0.0418 0.4981 0.7444 1.0000
It can be vector-based of course:
> summary(my.filter(a[,3], 500))
Min. 1st Qu. Median Mean 3rd Qu. Max. NA's
-1.4640 -0.2680 0.4671 -0.0418 0.4981 0.7444 1.0000 | Is there an R package with a pretty function that can deal effectively with outliers? | If you're importing your data with a command like, say,
read.table('yourfile.txt', header=TRUE, ...)
you can indicate what values are to be considered as "null" or NA values, by specifying na.strings | Is there an R package with a pretty function that can deal effectively with outliers?
If you're importing your data with a command like, say,
read.table('yourfile.txt', header=TRUE, ...)
you can indicate what values are to be considered as "null" or NA values, by specifying na.strings = "999999999". We can also consider different values for indicating NA values. Consider the following file (fake.txt) where we want to treat "." and "999999999" as NA values:
1 2 .
3 999999999 4
5 6 7
then in R we would do:
> a <- read.table("fake.txt", na.strings=c(".","999999999"))
> a
V1 V2 V3
1 1 2 NA
2 3 NA 4
3 5 6 7
Otherwise, you can always filter your data as indicated by @Sacha in his comment. Here, it could be something like
a[a=="." | a==999999999] <- NA
Edit
In case there are multiple abnormal values that can possibly be observed in different columns with different values, but you know the likely range of admissible values, you can apply a function to each column. For example, define the following filter:
my.filter <- function(x, threshold=100) ifelse(x > threshold, NA, x)
then
a.filt <- apply(a, 2, my.filter)
will replace every value > 100 with NA in the matrix a.
Example:
> a <- replicate(10, rnorm(10))
> a[1,3] <- 99999999
> a[5,6] <- 99999999
> a[8,10] <- 99999990
> summary(a[,3])
Min. 1st Qu. Median Mean 3rd Qu. Max.
-1e+00 0e+00 0e+00 1e+07 1e+00 1e+08
> af <- apply(a, 2, my.filter)
> summary(af[,3])
Min. 1st Qu. Median Mean 3rd Qu. Max. NA's
-1.4640 -0.2680 0.4671 -0.0418 0.4981 0.7444 1.0000
It can be vector-based of course:
> summary(my.filter(a[,3], 500))
Min. 1st Qu. Median Mean 3rd Qu. Max. NA's
-1.4640 -0.2680 0.4671 -0.0418 0.4981 0.7444 1.0000 | Is there an R package with a pretty function that can deal effectively with outliers?
If you're importing your data with a command like, say,
read.table('yourfile.txt', header=TRUE, ...)
you can indicate what values are to be considered as "null" or NA values, by specifying na.strings |
38,448 | Is there an R package with a pretty function that can deal effectively with outliers? | I encounter this quite frequently when dealing with customer daily time series data. It appears that many accounting systems IGNORE daily data that didn't occur, i.e. no transactions were recorded for that day (time interval/bucket), and don't fill in a "0". Since time series analysis requires a reading for every interval/bucket, we need to inject a "0" for the omitted observation. Intervention Detection is essentially a scheme to detect the anomaly and replace it with an expected value based on an identified profile/signal/prediction. If there are many of these "missing values" the system can break down. The problem becomes a little more complex when there is a strong day-of-the-week profile in the historical data and a "sequential patch of values" is not recorded, suggesting that replacement values be obtained by computing local daily averages as a precursor to fine-tuning these values. | Is there an R package with a pretty function that can deal effectively with outliers? | I encounter this quite frequently when dealing with customer daily time series data. It appears that many accounting systems IGNORE daily data that didn't occur i.e. no transactions were recorded for | Is there an R package with a pretty function that can deal effectively with outliers?
I encounter this quite frequently when dealing with customer daily time series data. It appears that many accounting systems IGNORE daily data that didn't occur i.e. no transactions were recorded for that day (time interval/bucket) and don't fill in a '0" number . Since time series analysis require a reading for every interval/bucket we need to inject a "0" for the omitted observation. Intervention Detection is essentially a scheme to detect the anomaly and replace it with an expected value based on an identified profile/signal/prediction. If there are many of these 'missing values" the system can break down The problem becomes a little more complex when there is strong day-of-the-week profile in the historical data and a "sequential patch of values" are not recorded, suggesting that replacement values be obtained by computing local daily averages as a precursor to fine-tuning these values. | Is there an R package with a pretty function that can deal effectively with outliers?
I encounter this quite frequently when dealing with customer daily time series data. It appears that many accounting systems IGNORE daily data that didn't occur i.e. no transactions were recorded for |
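A small base-R sketch of the first step described here, injecting an explicit 0 for the omitted days; the dates and totals are made up for illustration:
# hypothetical daily totals where some days are simply absent
sales <- data.frame(date = as.Date(c("2024-01-01", "2024-01-02", "2024-01-05")),
                    total = c(120, 90, 150))
# build the complete calendar and fill the missing days with 0
calendar <- data.frame(date = seq(min(sales$date), max(sales$date), by = "day"))
filled <- merge(calendar, sales, all.x = TRUE)
filled$total[is.na(filled$total)] <- 0
filled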
38,449 | L1 & L2 double role in Regularization and Cost functions? | They are distinct notions.
Point #2 refers to the usual kind of loss function. The first example almost anyone who studies statistics or data analysis of any kind sees is the square loss in ordinary least squares linear regression: add up the squared residuals.
$$
L(y,\hat y)=\overset{n}{\underset{i=1}{\sum}}\left(
y_i - \hat y_i
\right)^2
$$
Another viable loss function is to add up the absolute residuals.
$$
L(y,\hat y)=\overset{n}{\underset{i=1}{\sum}}\left\vert
y_i - \hat y_i
\right\vert
$$
Each of these can be expressed in terms of $p$-norms.
$$
L(y,\hat y)=\overset{n}{\underset{i=1}{\sum}}\left(
y_i - \hat y_i
\right)^2=\vert\vert y-\hat y\vert\vert_2^2\\
L(y,\hat y)=\overset{n}{\underset{i=1}{\sum}}\left\vert
y_i - \hat y_i
\right\vert = \vert\vert y-\hat y\vert\vert_1
$$
Consequently, it is reasonable to refer to these as $\ell_2$ and $\ell_1$ loss, respectively.
The penalization from the regularization in point #1 is separate. First, there is not necessarily a need to include penalization, so it might be that you just find the regression parameters that lead to predictions $\hat y_i$ giving the minimal $\ell_p$ loss, and this is exactly what ordinary least squares estimation does for $\ell_2$ loss. However, there are various reasons why the best loss value might not be desirable. Regularization is a way of sacrificing the training loss value in order to improve some other facet of performance, a major example being to sacrifice the in-sample fit of a machine learning model to quell overfitting and improve out-of-sample performance.
You can mix-and-match loss functions and regularization to your heart's content. For instance, ridge regression uses square loss with an added penalty term that involves the $\ell_2$ norm of the regression parameter vector.
$$
L_{\text{ridge}}=\vert\vert y-\hat y\vert\vert^2_2 + \lambda\vert\vert\hat\beta\vert\vert_2
$$
LASSO regression uses square loss with a penalty term that uses the $\ell_1$ norm of the parameter vector.
$$
L_{\text{LASSO}}=\vert\vert y-\hat y\vert\vert^2_2 + \lambda\vert\vert\hat\beta\vert\vert_1
$$
Elastic net uses both types of penalty.
$$
L_{\text{Elastic Net}}=\vert\vert y-\hat y\vert\vert^2_2 + \lambda_1\vert\vert\hat\beta\vert\vert_1+ \lambda_2\vert\vert\hat\beta\vert\vert_2
$$
Finally, while I do not see this approach discussed much, you could use $\ell_1$ loss with either penalty or even both.
$$
L_{\text{Other}}=\vert\vert y-\hat y\vert\vert_1 + \lambda_1\vert\vert\hat\beta\vert\vert_1+ \lambda_2\vert\vert\hat\beta\vert\vert_2
$$
(The $\lambda$ parameters control how much of a penalty there is for having large coefficients in the parameter vector. It is common to tune these using cross validation.)
Getting to other types of models, nothing stops you from using $\ell_1$ or $\ell_2$ penalization (or both) with, say, logistic regression and its associated "log loss".
$$
L_{\text{Log}}=-\overset{n}{\underset{i=1}{\sum}}\left(
y_i\log(\hat y_i) + (1 - y_i)\log(1 - \hat y_i)
\right)\\
L_{\text{Penalized Log}}=-\overset{n}{\underset{i=1}{\sum}}\left(
y_i\log(\hat y_i) + (1 - y_i)\log(1 - \hat y_i)
\right)+ \lambda_1\vert\vert\hat\beta\vert\vert_1+ \lambda_2\vert\vert\hat\beta\vert\vert_2
$$ | L1 & L2 double role in Regularization and Cost functions? | They are distinct notions.
Point #2 refers to the usual kind of loss function. The first example almost anyone who studies statistics or data analysis of any kind sees is the square loss in ordinary l | L1 & L2 double role in Regularization and Cost functions?
They are distinct notions.
Point #2 refers to the usual kind of loss function. The first example almost anyone who studies statistics or data analysis of any kind sees is the square loss in ordinary least squares linear regression: add up the squared residuals.
$$
L(y,\hat y)=\overset{n}{\underset{i=1}{\sum}}\left(
y_i - \hat y_i
\right)^2
$$
Another viable loss function is to add up the absolute residuals.
$$
L(y,\hat y)=\overset{n}{\underset{i=1}{\sum}}\left\vert
y_i - \hat y_i
\right\vert
$$
Each of these can be expressed in terms of $p$-norms.
$$
L(y,\hat y)=\overset{n}{\underset{i=1}{\sum}}\left(
y_i - \hat y_i
\right)^2=\vert\vert y-\hat y\vert\vert_2^2\\
L(y,\hat y)=\overset{n}{\underset{i=1}{\sum}}\left\vert
y_i - \hat y_i
\right\vert = \vert\vert y-\hat y\vert\vert_1
$$
Consequently, it is reasonable to refer to these as $\ell_2$ and $\ell_1$ loss, respectively.
The penalization from the regularization in point #1 is separate. First, there is not necessarily a need to include penalization, so it might be that you just find the regression parameters that lead to predictions $\hat y_i$ giving the minimal $\ell_p$ loss, and this is exactly what ordinary least squares estimation does for $\ell_2$ loss. However, there are various reasons why the best loss value might not be desirable. Regularization is a way of sacrificing the training loss value in order to improve some other facet of performance, a major example being to sacrifice the in-sample fit of a machine learning model to quell overfitting and improve out-of-sample performance.
You can mix-and-match loss functions and regularization to your heart's content. For instance, ridge regression uses square loss with an added penalty term that involves the $\ell_2$ norm of the regression parameter vector.
$$
L_{\text{ridge}}=\vert\vert y-\hat y\vert\vert^2_2 + \lambda\vert\vert\hat\beta\vert\vert_2
$$
LASSO regression uses square loss with a penalty term that uses the $\ell_1$ norm of the parameter vector.
$$
L_{\text{LASSO}}=\vert\vert y-\hat y\vert\vert^2_2 + \lambda\vert\vert\hat\beta\vert\vert_1
$$
Elastic net uses both types of penalty.
$$
L_{\text{Elastic Net}}=\vert\vert y-\hat y\vert\vert^2_2 + \lambda_1\vert\vert\hat\beta\vert\vert_1+ \lambda_2\vert\vert\hat\beta\vert\vert_2
$$
Finally, while I do not see this approach discussed much, you could use $\ell_1$ loss with either penalty or even both.
$$
L_{\text{Other}}=\vert\vert y-\hat y\vert\vert_1 + \lambda_1\vert\vert\hat\beta\vert\vert_1+ \lambda_2\vert\vert\hat\beta\vert\vert_2
$$
(The $\lambda$ parameters control how much of a penalty there is for having large coefficients in the parameter vector. It is common to tune these using cross validation.)
Getting to other types of models, nothing stops you from using $\ell_1$ or $\ell_2$ penalization (or both) with, say, logistic regression and its associated "log loss".
$$
L_{\text{Log}}=-\overset{n}{\underset{i=1}{\sum}}\left(
y_i\log(\hat y_1) + (1 - y_1)\log(1 - \hat y_1)
\right)\\
L_{\text{Penalized Log}}=-\overset{n}{\underset{i=1}{\sum}}\left(
y_i\log(\hat y_1) + (1 - y_1)\log(1 - \hat y_1)
\right)+ \lambda_1\vert\vert\hat\beta\vert\vert_1+ \lambda_2\vert\vert\hat\beta\vert\vert_2
$$ | L1 & L2 double role in Regularization and Cost functions?
They are distinct notions.
Point #2 refers to the usual kind of loss function. The first example almost anyone who studies statistics or data analysis of any kind sees is the square loss in ordinary l |
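As an illustration of the mix-and-match point, glmnet fits the square-loss versions of these penalties; note that it parameterises the elastic net with a single lambda plus a mixing weight alpha rather than the two lambdas written above, and the data below are simulated purely for the sketch:
library(glmnet)
set.seed(42)
n <- 200; p <- 10
x <- matrix(rnorm(n * p), n, p)
y <- x[, 1] - 2 * x[, 2] + rnorm(n)
ridge <- cv.glmnet(x, y, alpha = 0)    # square loss + L2 penalty
lasso <- cv.glmnet(x, y, alpha = 1)    # square loss + L1 penalty
enet  <- cv.glmnet(x, y, alpha = 0.5)  # square loss + both penalties
coef(lasso, s = "lambda.min")          # the L1 penalty shrinks many coefficients to exactly zero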
38,450 | L1 & L2 double role in Regularization and Cost functions? | no, they refer to two different things:
prior over parameters (that's your belief of how the parameters should be distributed)
assumption on the "noise" of the measurements (given an observation, what's the distribution that you think describes the noise)
An L1 penalty corresponds to a Laplacian prior and MAE (L1 loss) to a Laplacian likelihood; an L2 penalty corresponds to a Gaussian prior and MSE (L2 loss) to a Gaussian likelihood. Both follow from the maximum log-likelihood (or maximum a posteriori) principle | L1 & L2 double role in Regularization and Cost functions? | no, they refer to two different things:
prior over parameters (that's your belief of how the parameters should be distributed)
assumption on the "noise" of the measurements (given an observation, wha | L1 & L2 double role in Regularization and Cost functions?
no, they refer to two different things:
prior over parameters (that's your belief of how the parameters should be distributed)
assumption on the "noise" of the measurements (given an observation, what's the distribution that you think describes the noise)
MAE and L1 is a Laplacian prior, MSE and L2 is a Gaussian likelihood/prior, and they are both derived using the max log-likelihood principle | L1 & L2 double role in Regularization and Cost functions?
no, they refer to two different things:
prior over parameters (that's your belief of how the parameters should be distributed)
assumption on the "noise" of the measurements (given an observation, wha |
38,451 | L1 & L2 double role in Regularization and Cost functions? | The terms "L1" and "L2" refer to special functions called norms, which measure the length or size of a vector. You are correct in that they are used in two different contexts in statistics and machine learning, but their meanings are the same in both contexts.
In the context of regularization, the L1 and/or L2 norm restricts the magnitude of the parameter vector of a model. The difference between L1 and L2 regularization comes down to the differences between the L1 and L2 norms. See e.g. https://medium.com/analytics-vidhya/effects-of-l1-and-l2-regularization-explained-5a916ecf4f06. As pointed out in other answers, L1 regularization in a regression model corresponds to a Laplace prior on coefficients in Bayesian modeling, and L2 regularization corresponds to a Gaussian prior.
In the context of loss functions, the L1 or L2 norm measures the magnitude of the error vector of the model on a train/test/validation set. L1 loss is the Mean Absolute Error (MAE), and L2 loss is the Root Mean Squared Error (RMSE). As pointed out in the comments, regression models fitted with L1 loss are models of a conditional median, while models fitted with L2 loss are models of a conditional expectation (conditional mean). The latter also happens to correspond with a Gaussian GLM maximum-likelihood model, where the conditional distribution of the data follows a Gaussian distribution centered at the regression prediction.
The L2 norm corresponds to our conventional notion of Euclidean distance, which is essentially a multi-dimensional extension of the Pythagorean theorem. You can think of Euclidean distances as the lengths of hypotenuses of right triangles drawn between points.
The L1 norm corresponds to the weirder notion of Manhattan (aka "Taxicab") distance, so named because distances resemble the distance traveled by a taxi cab following the grid layout of streets in Manhattan, New York.
It's very common in statistics and machine learning to use L2 loss (MSE) with L1 regularization, or even both L1 and L2 regularization in the same model.
L1 loss (MAE) is much less common than L2 in general, in part because the absolute value is not differentiable at zero. However, there is a "smooth" differentiable L1 loss that attempts to mimic the properties of true L1 loss, see e.g. How to interpret smooth l1 loss?. | L1 & L2 double role in Regularization and Cost functions? | The terms "L1" and "L2" refer to special functions called norms, which measure the length or size of a vector. You are correct in that they are used in two different contexts in statistics and machine | L1 & L2 double role in Regularization and Cost functions?
The terms "L1" and "L2" refer to special functions called norms, which measure the length or size of a vector. You are correct in that they are used in two different contexts in statistics and machine learning, but their meanings are the same in both contexts.
In the context of regularization, the L1 and/or L2 norm restricts the magnitude of the parameter vector of a model. The difference between L1 and L2 regularization comes down to the differences between the L1 and L2 norms. See e.g. https://medium.com/analytics-vidhya/effects-of-l1-and-l2-regularization-explained-5a916ecf4f06. As pointed out in other answers, L1 regularization in a regression model corresponds to a Laplace prior on coefficients in Bayesian modeling, and L2 regularization corresponds to a Gaussian prior.
In the context of loss functions, the L1 or L2 norm measures the magnitude of the error vector of the model on a train/test/validation set. L1 loss is the Median Absolute Error (MAE), and L2 loss is the Root Mean Squared Error (RMSE). As pointed out in the comments, regression models fitted with L1 loss are models of a conditional median, while models fitted with L2 loss are models of a conditional expectation (conditional mean). The latter also happens to correspond with a Gaussian GLM maximum-likelihood model, where the conditional distribution of the data follows a Gaussian distribution centered at the regression prediction.
The L2 norm corresponds to our conventional notion of Euclidean distance, which is essentially a multi-dimensional extension of the Pythagorean theorem. You can think of Euclidean distances as the lengths of hypotenuses of right triangles drawn between points.
The L1 norm corresponds to the weirder notion of Manhattan (aka "Taxicab") distance, so named because distances resemble the distance traveled by a taxi cab following the grid layout of streets in Manhattan, New York.
It's very common in statistics and machine learning to use L2 loss (MSE) with L1 regularization, or even both L1 and L2 regularization in the same model.
L1 loss (MAE) is much less common than L2 in general, in part because the absolute value is not differentiable. However there is a "smooth" differentiable L1 loss that attempts to mimic the properties of true L1 loss, see e.g. How to interpret smooth l1 loss?. | L1 & L2 double role in Regularization and Cost functions?
The terms "L1" and "L2" refer to special functions called norms, which measure the length or size of a vector. You are correct in that they are used in two different contexts in statistics and machine |
38,452 | What type of regression to use when outcome is integers from 0 to 5 | @Doctor Milt's response is on the right track, but I think this is much more naturally handled using a multilevel logistic (or probit) regression, with each person's response to each item (0 or 1) as the outcome variable.
You would definitely want to allow the average probability of a 1 to vary across participants and across questions (random intercepts). You would probably also allow the influence of your predictors to vary across questions (random slopes), although depending on your data set this model might be too complicated to estimate. This is a class of item response theory model.
With a data frame containing one row per response, the random intercepts and slope model would be coded in R as
glmer(response ~ predictor1 + predictor2 +
(1 | participant_id) +
(1 + predictor1 + predictor2 | question_id),
data = your_data, family = binomial)
You might also consider using brms to fit this model. brms has excellent support for item response theory models (see https://arxiv.org/pdf/1905.09501.pdf). | What type of regression to use when outcome is integers from 0 to 5 | @Doctor Milt's response is on the right track, but I think this is much more naturally handled using a multilevel logistic (or probit) regression, each person's response to each item (0 or 1) as the o | What type of regression to use when outcome is integers from 0 to 5
@Doctor Milt's response is on the right track, but I think this is much more naturally handled using a multilevel logistic (or probit) regression, each person's response to each item (0 or 1) as the outcome variable.
You would definitely want to allow the average probability of a 1 vary across participants and across questions (random intercept). You would probably also allow the influence of your predictors to vary across questions (random slopes), although depending on your data set this model might be too complicated to estimate. This is a class of item response theory model.
With a data frame containing one row per response, the random intercepts and slope model would be coded in R as
glmer(response ~ predictor1 + predictor2 +
(1 | participant_id) +
(1 + predictor1 + predictor2 | question_id),
data = your_data, family = binomial)
You might also consider using brms to fit this model. brms has excellent support for item response theory models (see https://arxiv.org/pdf/1905.09501.pdf). | What type of regression to use when outcome is integers from 0 to 5
@Doctor Milt's response is on the right track, but I think this is much more naturally handled using a multilevel logistic (or probit) regression, each person's response to each item (0 or 1) as the o |
38,453 | What type of regression to use when outcome is integers from 0 to 5 | I like the idea of a Bernoulli model, as you could start with some strong assumptions and gradually relax them.
Let $Y_{ij} \sim \mathrm{Bernoulli}(p_{ij})$, $i=1,\ldots,n$, $j=1,\ldots,5$, be the response given by the $i$th person to the $j$th question. The probability $p_{ij}$ is a function of the explanatory variables, $\mathrm{logit}(p_{ij}) = \beta^{(0)}_{j} + \sum_{k=1}^K \beta^{(k)}_{j} x_{ik}$.
You could try:
$p_{ij}=p_i$, i.e. a person is equally likely to respond yes to any of the 5 questions. Their final score is then $S_i=\sum_{j=1}^5 Y_{ij} \sim \mathrm{Bin}(5, p_i)$, as in @utobi's comment. You can drop the $j$ subscripts from the regression coefficients.
A person is more likely to respond yes to some questions than others, but the relationship between predictors and outcome is the same for every question. This means that the slope coefficients ($\beta^{(k)}_{j}$) are the same for all $j$, but the intercepts ($\beta^{(0)}_{j}$) are different for different $j$.
The relationship between predictors and outcome varies by question, so you have different intercepts and slopes for each $j$. At this point, you could think about whether a prior distribution on the regression coefficients makes sense. | What type of regression to use when outcome is integers from 0 to 5 | I like the idea of a Bernoulli model, as you could start with some strong assumptions and gradually relax them.
Let $Y_{ij} \sim \mathrm{Bernoulli}(p_{ij})$, $i=1,\ldots,n$, $j=1,\ldots,5$, be the res | What type of regression to use when outcome is integers from 0 to 5
I like the idea of a Bernoulli model, as you could start with some strong assumptions and gradually relax them.
Let $Y_{ij} \sim \mathrm{Bernoulli}(p_{ij})$, $i=1,\ldots,n$, $j=1,\ldots,5$, be the response given by the $i$th person to the $j$th question. The probability $p_{ij}$ is a function of the explanatory variables, $\mathrm{logit}(p_{ij}) = \beta^{(0)}_{j} + \sum_{k=1}^K \beta^{(k)}_{j} x_{ik}$.
You could try:
$p_{ij}=p_i$, i.e. a person is equally likely to respond yes to any of the 5 questions. Their final score is then $S_i=\sum_{j=1}^5 Y_{ij} \sim \mathrm{Bin}(5, p_i)$, as in @utobi's comment. You can drop the $j$ subscripts from the regression coefficients.
A person is more likely to respond yes to some questions than others, but the relationship between predictors and outcome is the same for every question. This means that the slope coefficients ($\beta^{(k)}_{j}$) are the same for all $j$, but the intercepts ($\beta^{(0)}_{j}$) are different for different $j$.
The relationship between predictors and outcome varies by question, so you have different intercepts and slopes for each $j$. At this point, you could think about whether a prior distribution on the regression coefficients makes sense. | What type of regression to use when outcome is integers from 0 to 5
I like the idea of a Bernoulli model, as you could start with some strong assumptions and gradually relax them.
Let $Y_{ij} \sim \mathrm{Bernoulli}(p_{ij})$, $i=1,\ldots,n$, $j=1,\ldots,5$, be the res |
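A minimal sketch of assumption 1 above (total score $S_i \sim \mathrm{Bin}(5, p_i)$) fitted as a binomial GLM; the predictors, coefficients and sample size are made up for illustration:
set.seed(7)
n <- 150
x1 <- rnorm(n); x2 <- rnorm(n)
p <- plogis(-0.5 + 0.8 * x1 - 0.4 * x2)   # true success probabilities
S <- rbinom(n, size = 5, prob = p)        # observed scores from 0 to 5
fit <- glm(cbind(S, 5 - S) ~ x1 + x2, family = binomial)
summary(fit)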
38,454 | Interpreting the Lambdas of Yeo Johnson Transformation? | A table adds little, but a picture can add a lot more to our understanding. I offer two pictures.
Unlike the Box-Cox transformation, which applies to positive numbers, the Yeo-Johnson transformation applies to all numbers. It does so by splitting the real line at zero, shifting the positive values by $1$ and the negative values by $-1,$ and applying a Box-Cox transformation to the absolute values, negating them when the argument is negative. In effect, it sews two Box-Cox transformations together. However, they have "inverse" Box-Cox parameters. The natural origin of the Box-Cox parameters is $\lambda = 1$ and the "inverse" parameter is
$$\lambda^\prime = 2 - \lambda,$$
reflecting the parameter line around $\lambda = 1.$ The sewing is smooth (as you will see in the first plot below) because all Box-Cox transformations are by design made to agree with the identity transformation at $x = 1.$
For pictures of the Box-Cox transformations and some explanation of their construction, see https://stats.stackexchange.com/a/467525/919. These transformations are given by
$$\operatorname{BC}(x;\lambda) = \frac{x^\lambda - 1}{\lambda}$$
(which has the limiting value of $\log(x)$ when $\lambda = 0$). They can be inverted: when $y$ is the transformed value, the original $x$ is recovered by
$$\operatorname{BC}^{-1}(y;\lambda) = (1 + \lambda y)^{1/\lambda}$$
(limiting to the exponential function when $\lambda = 0$).
The Yeo-Johnson transformation is
$$\operatorname{YJ}(x;\lambda) = \left\{\begin{aligned}\operatorname{BC}(1+x,\lambda), && x \ge 0\\ -\operatorname{BC}(1-x, \lambda^\prime),&& x \lt 0.\end{aligned} \right.$$
These can all be inverted by inverting the positive and negative values separately.
The implementation in any programming language is thereby simple. In R, for instance, it is
BC <- function(x, lambda) ifelse(lambda != 0, (x^lambda - 1) / lambda, log(x))
YJ <- function(y, lambda) ifelse(y >= 0, BC(y + 1, lambda), -BC(1 - y, 2-lambda))
The graphs of $\operatorname{YJ}$ show the effects on the data for various $\lambda.$
Here's what they do to a reference (Normal) distribution (the green distribution for $\lambda = 1$ in the middle panel):
Like the Box-Cox family, these transformations make a distribution more positively skewed when $\lambda \gt 1$ and more negatively skewed when $\lambda \lt 1.$ | Interpreting the Lambdas of Yeo Johnson Transformation? | A table adds little, but a picture can add a lot more to our understanding. I offer two pictures.
Unlike the Box-Cox transformation, which applies to positive numbers, the Yeo-Johnson transformation | Interpreting the Lambdas of Yeo Johnson Transformation?
A table adds little, but a picture can add a lot more to our understanding. I offer two pictures.
Unlike the Box-Cox transformation, which applies to positive numbers, the Yeo-Johnson transformation applies to all numbers. It does so by splitting the real line at zero, shifting the positive values by $1$ and the negative values by $-1,$ and applying a Box-Cox transformation to the absolute values, negating them when the argument is negative. In effect, it sews two Box-Cox transformations together. However, they have "inverse" Box-Cox parameters. The natural origin of the Box-Cox parameters is $\lambda = 1$ and the "inverse" parameter is
$$\lambda^\prime = 2 - \lambda,$$
reflecting the parameter line around $\lambda = 1.$ The sewing is smooth (as you will see in the first plot below) because all Box-Cox transformations are by design made to agree with the identity transformation at $x = 1.$
For pictures of the Box-Cox transformations and some explanation of their construction, see https://stats.stackexchange.com/a/467525/919. These transformations are given by
$$\operatorname{BC}(x;\lambda) = \frac{x^\lambda - 1}{\lambda}$$
(which has the limiting value of $\log(x)$ when $\lambda = 0$). They can be inverted: when $y$ is the transformed value, the original $x$ is recovered by
$$\operatorname{BC}^{-1}(y;\lambda) = (1 + \lambda y)^{1/\lambda}$$
(limiting to the exponential function when $\lambda = 0$).
The Yeo-Johnson transformation is
$$\operatorname{YJ}(x;\lambda) = \left\{\begin{aligned}\operatorname{BC}(1+x,\lambda), && x \ge 0\\ -\operatorname{BC}(1-x, \lambda^\prime),&& x \lt 0.\end{aligned} \right.$$
These can all be inverted by inverting the positive and negative values separately.
The implementation in any programming language is thereby simple. In R, for instance, it is
BC <- function(x, lambda) ifelse(lambda != 0, (x^lambda - 1) / lambda, log(x))
YJ <- function(y, lambda) ifelse(y >= 0, BC(y + 1, lambda), -BC(1 - y, 2-lambda))
The graphs of $\operatorname{YJ}$ show the effects on the data for various $\lambda,$
Here's what they do to a reference (Normal) distribution (the green distribution for $\lambda = 1$ in the middle panel):
Like the Box-Cox family, these transformations make a distribution more positively skewed when $\lambda \gt 1$ and more negatively skewed when $\lambda \lt 1.$ | Interpreting the Lambdas of Yeo Johnson Transformation?
A table adds little, but a picture can add a lot more to our understanding. I offer two pictures.
Unlike the Box-Cox transformation, which applies to positive numbers, the Yeo-Johnson transformation |
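Following the inversion formulas quoted in the answer, the inverse transformation can be coded in the same style as the BC and YJ functions above; this sketch reuses those two functions and includes a quick round-trip check (the test values and lambda are arbitrary):
BCinv <- function(y, lambda) ifelse(lambda != 0, (1 + lambda * y)^(1 / lambda), exp(y))
YJinv <- function(y, lambda) ifelse(y >= 0, BCinv(y, lambda) - 1, 1 - BCinv(-y, 2 - lambda))
x <- c(-3, -0.5, 0, 0.5, 3)
all.equal(YJinv(YJ(x, 0.3), 0.3), x)  # should be TRUE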
38,455 | What is the 'right' slope formula of a regression? deltas or Pearson? | For only two points they are the same.
The slope of simple linear regression is
$$
\hat \beta = \frac{\sum_i (x_i - \bar x) (y_i - \bar y)}{\sum_i (x_i - \bar x)^2}
$$
that is the same form you mentioned in the second bullet. Notice that when we have two points, then things like $x_1 - \bar x$ can be written as
$$
x_1 - \frac{x_1 + x_2}{2} = \frac{x_1 - x_2}{2}
$$
So we can re-write
$$
\require{cancel}
\begin{align}
\hat \beta &= \frac{\frac{x_1 - x_2}{2}\frac{y_1 - y_2}{2} + \frac{x_2 - x_1}{2} \frac{y_2 - y_1}{2}}{ \frac{x_1 - x_2}{2}\frac{x_1 - x_2}{2} + \frac{x_2 - x_1}{2}\frac{x_2 - x_1}{2} } \\
&= \frac{
(\frac{x_1y_1}{4} - \frac{x_1y_2}{4} - \frac{x_2y_1}{4} + \frac{x_2y_2}{4})
+ (\frac{x_2y_2}{4} - \frac{x_2y_1}{4} - \frac{x_1y_2}{4} + \frac{x_1y_1}{4})
}{\cancel{2} \frac{x_1 - x_2}{\cancel 2}\frac{x_1 - x_2}{2}} \\
&= \frac{\frac{x_1 y_1 - x_1 y_2 - x_2 y_1 + x_2 y_2}{\cancel 2}}{ (x_1 - x_2)\frac{x_1 - x_2}{\cancel 2}} \\
&= \frac{\cancel{(x_1 - x_2)}( y_1 - y_2)}{\cancel{(x_1 - x_2)}(x_1 - x_2)} \\
&= \frac{y_1 - y_2}{x_1 - x_2} = \frac{y_2 - y_1}{x_2 - x_1}
\end{align}$$
They must be the same, because the regression line calculated from two points has to pass through both of them, as there is no "noise". Linear regression is a linear model, so the two formulas need to agree algebraically.
If you have more than two points, as Dave mentioned, you cannot use the rise/run formula anymore. | What is the 'right' slope formula of a regression? deltas or Pearson? | For only two points they are the same.
The slope of simple linear regression is
$$
\hat \beta = \frac{\sum_i (x_i - \bar x) (y_i - \bar y)}{\sum_i (x_i - \bar x)^2}
$$
that is the same form you mentio | What is the 'right' slope formula of a regression? deltas or Pearson?
For only two points they are the same.
The slope of simple linear regression is
$$
\hat \beta = \frac{\sum_i (x_i - \bar x) (y_i - \bar y)}{\sum_i (x_i - \bar x)^2}
$$
that is the same form you mentioned in the second bullet. Notice that when we have two points, then things like $x_1 - \bar x$ can be written as
$$
x_1 - \frac{x_1 + x_2}{2} = \frac{x_1 - x_2}{2}
$$
So we can re-write
$$
\require{cancel}
\begin{align}
\hat \beta &= \frac{\frac{x_1 - x_2}{2}\frac{y_1 - y_2}{2} + \frac{x_2 - x_1}{2} \frac{y_2 - y_1}{2}}{ \frac{x_1 - x_2}{2}\frac{x_1 - x_2}{2} + \frac{x_2 - x_1}{2}\frac{x_2 - x_1}{2} } \\
&= \frac{
(\frac{x_1y_1}{4} - \frac{x_1y_2}{4} - \frac{x_2y_1}{4} + \frac{x_2y_2}{4})
+ (\frac{x_2y_2}{4} - \frac{x_2y_1}{4} - \frac{x_1y_2}{4} + \frac{x_1y_1}{4})
}{\cancel{2} \frac{x_1 - x_2}{\cancel 2}\frac{x_1 - x_2}{2}} \\
&= \frac{\frac{x_1 y_1 - x_1 y_2 - x_2 y_1 + x_2 y_2}{\cancel 2}}{ (x_1 - x_2)\frac{x_1 - x_2}{\cancel 2}} \\
&= \frac{\cancel{(x_1 - x_2)}( y_1 - y_2)}{\cancel{(x_1 - x_2)}(x_1 - x_2)} \\
&= \frac{y_1 - y_2}{x_1 - x_2} = \frac{y_2 - y_1}{x_2 - x_1}
\end{align}$$
They must have been the same because the regression line calculated for two points needs to pass through them as there's no "noise". Linear regression is a linear model, so they need to be algebraically the same.
If you have more than two points, as Dave mentioned, you cannot use the rise/run formula anymore. | What is the 'right' slope formula of a regression? deltas or Pearson?
For only two points they are the same.
The slope of simple linear regression is
$$
\hat \beta = \frac{\sum_i (x_i - \bar x) (y_i - \bar y)}{\sum_i (x_i - \bar x)^2}
$$
that is the same form you mentio |
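A quick numerical check of the equivalence, and of where it stops holding, using lm in R; the data points are arbitrary:
x <- c(1, 4); y <- c(2, 8)
coef(lm(y ~ x))[["x"]]          # least-squares slope: 2
(y[2] - y[1]) / (x[2] - x[1])   # rise over run: also 2
# with more than two points rise/run no longer applies;
# the least-squares slope uses all the points at once
x <- c(1, 2, 3, 5); y <- c(1.2, 1.9, 3.4, 4.8)
coef(lm(y ~ x))[["x"]]
cov(x, y) / var(x)              # same value, from the regression formula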
38,456 | What is the 'right' slope formula of a regression? deltas or Pearson? | These are fairly unrelated concepts.
In the first equation, that is the slope of the line connecting two points.
In the second, you are finding a line that fits multiple points best (according to a particular and common definition of best). This line is called the ordinary least squares regression line. This is what is produced by “add trendline” in Excel. Unlike in the first case, your “trendline” might not hit any of the points.
The two will coincide if you have two points and fit a “trendline” to those points (aside from a technicality if the points have the same $x$- or $y$-coordinate), but I’d say that the similarities mostly end there. | What is the 'right' slope formula of a regression? deltas or Pearson? | These are fairly unrelated concepts.
In the first equation, that is the slope of the line connecting two points.
In the second, you are finding a line that fits multiple points best (according to a pa | What is the 'right' slope formula of a regression? deltas or Pearson?
These are fairly unrelated concepts.
In the first equation, that is the slope of the line connecting two points.
In the second, you are finding a line that fits multiple points best (according to a particular and common definition of best). This line is called the ordinary least squares regression line. This is what is produced by “add trendline” in Excel. Unlike in the first case, your “trendline” might not hit any of the points.
The two will coincide if you have two points and fit a “trendline” to those points (aside from a technicality if the points have the same $x$- or $y$-coordinate), but I’d say that the similarities mostly end there. | What is the 'right' slope formula of a regression? deltas or Pearson?
These are fairly unrelated concepts.
In the first equation, that is the slope of the line connecting two points.
In the second, you are finding a line that fits multiple points best (according to a pa |
38,457 | Trying to understand Bootstrapping w/ Python | When you calculate your lower and upper bounds, you are dividing the standard deviation of your means by $\sqrt{1000}$, which I assume you are doing because you want to convert a standard deviation to a standard error. But the standard deviation you are using is already the standard deviation (or standard error) of the mean, so you don't need to do this. If you replace those lines with:
lower_bound = mean - 1.96*std
higher_bound = mean + 1.96*std
Your results should look better. | Trying to understand Bootstrapping w/ Python | When you calculate your lower and upper bounds, you are dividing your standard deviations of your means by $\sqrt{1000}$, which I assume you are doing because you want to convert your standard deviati | Trying to understand Bootstrapping w/ Python
When you calculate your lower and upper bounds, you are dividing the standard deviation of your means by $\sqrt{1000}$, which I assume you are doing because you want to convert a standard deviation to a standard error. But the standard deviation you are using is already the standard deviation (or standard error) of the mean, so you don't need to do this. If you replace those lines with:
lower_bound = mean - 1.96*std
higher_bound = mean + 1.96*std
Your results should look better. | Trying to understand Bootstrapping w/ Python
When you calculate your lower and upper bounds, you are dividing your standard deviations of your means by $\sqrt{1000}$, which I assume you are doing because you want to convert your standard deviati |
38,458 | Trying to understand Bootstrapping w/ Python | This is a comment to complement @Lynn's answer (+1) which explains the big mistake in your implementation of the bootstrap.
There is another error. It's more subtle, as it doesn't have an obvious effect on the coverage of the confidence interval: you should use the sample statistic, not the average of the bootstrapped statistics, as the center of the bootstrap confidence interval.
Here is the updated python code:
import numpy as np
np.random.seed(1234)
sample_size = 100
bootstrap_reps = 1000
population_mean = 100
population_std = 5
sample = np.random.normal(population_mean, population_std, sample_size)
# Function to generate bootstrap samples
bootstrap_sample = lambda: np.random.choice(sample, size=sample_size, replace=True)
# The sample mean is an estimator of the population mean
estimator = np.mean
sample_statistic = estimator(sample)
bootstrapped_statistics = [
estimator(bootstrap_sample())
for _ in range(bootstrap_reps)
]
# Use the sample statistic [here the sample mean],
# not the average of the bootstrapped statistics,
# to construct the confidence interval
sample_statistic, np.mean(bootstrapped_statistics)
# (100.17556141562721, 100.18181492004352)
# Lower and upper limit of the 95% confidence interval for the population parameter
z_alpha = 1.96
ci_lower = sample_statistic - z_alpha * np.std(bootstrapped_statistics)
ci_upper = sample_statistic + z_alpha * np.std(bootstrapped_statistics) | Trying to understand Bootstrapping w/ Python | This is a comment to complement @Lynn's answer (+1) which explains the big mistake in your implementation of the bootstrap.
There is another error. It's more subtle as it doesn't have an obvious effec | Trying to understand Bootstrapping w/ Python
This is a comment to complement @Lynn's answer (+1) which explains the big mistake in your implementation of the bootstrap.
There is another error. It's more subtle, as it doesn't have an obvious effect on the coverage of the confidence interval: you should use the sample statistic, not the average of the bootstrapped statistics, as the center of the bootstrap confidence interval.
Here is the updated python code:
import numpy as np
np.random.seed(1234)
sample_size = 100
bootstrap_reps = 1000
population_mean = 100
population_std = 5
sample = np.random.normal(population_mean, population_std, sample_size)
# Function to generate bootstrap samples
bootstrap_sample = lambda: np.random.choice(sample, size=sample_size, replace=True)
# The sample mean is an estimator of the population mean
estimator = np.mean
sample_statistic = estimator(sample)
bootstrapped_statistics = [
estimator(bootstrap_sample())
for _ in range(bootstrap_reps)
]
# Use the sample statistic [here the sample mean],
# not the average of the bootstrapped statistics,
# to construct the confidence interval
sample_statistic, np.mean(bootstrapped_statistics)
# (100.17556141562721, 100.18181492004352)
# Lower and upper limit of the 95% confidence interval for the population parameter
z_alpha = 1.96
ci_lower = sample_statistic - z_alpha * np.std(bootstrapped_statistics)
ci_upper = sample_statistic + z_alpha * np.std(bootstrapped_statistics) | Trying to understand Bootstrapping w/ Python
This is a comment to complement @Lynn's answer (+1) which explains the big mistake in your implementation of the bootstrap.
There is another error. It's more subtle as it doesn't have an obvious effec |
38,459 | How to find median value for five given elements based on the max min and sum of the elements | If you only know the min, max, and sum of the 5 numbers, you cannot determine the median.
E.g.
median(1, 2, 3, 4, 5)=3
median(1, 2.1, 2.8, 4.1, 5)=2.8.
But both have (min, max, sum) = (1, 5, 15). | How to find median value for five given elements based on the max min and sum of the elements | If you only know the min, max, and sum of the 5 numbers, you cannot determine the median.
E.g.
median(1, 2, 3, 4, 5)=3
median(1, 2.1, 2.8, 4.1, 5)=2.8.
But both have (min, max, sum) = (1, 5, 15). | How to find median value for five given elements based on the max min and sum of the elements
If you only know the min, max, and sum of the 5 numbers, you cannot determine the median.
E.g.
median(1, 2, 3, 4, 5)=3
median(1, 2.1, 2.8, 4.1, 5)=2.8.
But both have (min, max, sum) = (1, 5, 15). | How to find median value for five given elements based on the max min and sum of the elements
If you only know the min, max, and sum of the 5 numbers, you cannot determine the median.
E.g.
median(1, 2, 3, 4, 5)=3
median(1, 2.1, 2.8, 4.1, 5)=2.8.
But both have (min, max, sum) = (1, 5, 15). |
38,460 | How to find median value for five given elements based on the max min and sum of the elements | Find the max and min of the initial list.
Create a new list without those 2 elements.
Find the new max, min and sum and use the method you mentioned,
median = sum - min - max | How to find median value for five given elements based on the max min and sum of the elements | Find the max and min of the initial list.
Create a new list without those 2 elements.
Find the new max, min and sum and use the method you mentioned,
median = sum - min - max | How to find median value for five given elements based on the max min and sum of the elements
Find the max and min of the initial list.
Create a new list without those 2 elements.
Find the new max, min and sum and use the method you mentioned,
median = sum - min - max | How to find median value for five given elements based on the max min and sum of the elements
Find the max and min of the initial list.
Create a new list without those 2 elements.
Find the new max, min and sum and use the method you mentioned,
median = sum - min - max |
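A minimal R sketch of the procedure described above for a 5-element list (the numbers are an arbitrary illustration; which.max() and which.min() drop only one copy of the max and the min):
x <- c(7, 2, 9, 4, 5)
inner <- x[-c(which.max(x), which.min(x))]   # drop one max and one min, leaving 3 values
sum(inner) - min(inner) - max(inner)         # 5
median(x)                                    # 5, agrees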
38,461 | Asymptotic Normality and Consistency | Convergence to a constant does not mean that you have an estimator that is exactly equal to this constant at any given time. It just means that given a big enough sample size, you can expect that your estimator will be close to the true value of the parameter.
Asymptotic normality most often includes some sort of a scaling (often by $\sqrt{n}$, where $n$ is the sample size). It means that we are not looking at the estimator per se, but at a scaled version of it which does not converge to a constant.
You can look at it this way (this explanation is not very precise): without such scaling, when your sample size is big enough, the distribution of the estimator can be approximated by a normal distribution with variance which is decreasing with the sample size. The bigger the sample size, the smaller the variance $\Rightarrow$ in the limit, the variance vanishes completely, hence, convergence to a constant is achieved. | Asymptotic Normality and Consistency | Convergence to a constant does not mean that you have an estimator that is exactly equal to this constant at any given time. It just means that given a big enough sample size, you can expect that you | Asymptotic Normality and Consistency
Convergence to a constant does not mean that you have an estimator that is exactly equal to this constant at any given time. It just means that given a big enough sample size, you can expect that your estimator will be close to the true value of the parameter.
Asymptotic normality most often includes some sort of a scaling (often by $\sqrt{n}$, where $n$ is the sample size). It means that we are not looking at the estimator per se, but at a scaled version of it which does not converge to a constant.
You can look at it this way (this explanation is not very precise): without such scaling, when your sample size is big enough, the distribution of the estimator can be approximated by a normal distribution with variance which is decreasing with the sample size. The bigger the sample size, the smaller the variance $\Rightarrow$ in the limit, the variance vanishes completely, hence, convergence to a constant is achieved. | Asymptotic Normality and Consistency
Convergence to a constant does not mean that you have an estimator that is exactly equal to this constant at any given time. It just means that given a big enough sample size, you can expect that you |
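A minimal R illustration of this point, using the sample mean of standard normal data as a stand-in for a generic estimator (the sample sizes and replication count below are arbitrary): the spread of the unscaled estimator shrinks towards the constant, while the $\sqrt{n}$-scaled version keeps a stable spread.
set.seed(1)
for (n in c(10, 1000, 100000)) {
  xbar <- replicate(2000, mean(rnorm(n)))                    # 2000 sample means of size n
  cat("n =", n,
      " sd(xbar) =", round(sd(xbar), 4),                     # shrinks roughly like 1/sqrt(n)
      " sd(sqrt(n)*xbar) =", round(sd(sqrt(n) * xbar), 4), "\n")
}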
38,462 | Asymptotic Normality and Consistency | I think you may either be confused with the modes of convergence associated with consistency and asymptotic normality or with the definition of consistency. It is important to note that consistency is defined as convergence to the true value of the parameter of interest. Regardless, let us begin with the definitions,
We say that an estimator, $\theta_n$, is consistent if it converges in probability to the true value, $\theta_0$, of the parameter which can be denoted by,
$$\theta_n \overset{p}{\to} \theta_0 \iff \lim_{n\to\infty}\Pr[||\theta_n -\theta_0||>\epsilon]=0 \text{ for every } \epsilon > 0$$
We say that an estimator, $\theta_n$, is asymptotically normal if it converges in distribution (or in law, or weakly) to a normally distributed random variable. That is,
$$\sqrt{n}(\theta_n - \theta_0)\overset{d}{\to} N(0,\sigma^2)$$
There are several definitions of convergence in distribution depending on the level of abstraction you want to consider, for now, let us use the most commonly used definition:
We say that a sequence of random variables $Y_1, Y_2,\dots$ converges in distribution if $F_{Y_n}(y)\to F_Y(y)$ for all continuity points $y$ of the cdf $F_{Y}$.
To answer your question, you need to understand the differences between these modes of convergence. The first thing to note is that convergence in distribution is weaker than convergence in probability. In fact, convergence in probability implies convergence in distribution. That is,
$$|Y_n - X_n|\overset{p}{\to}0, X_n \overset{d}{\to}X \implies Y_n \overset{d}{\to} X$$
This takes some work to show for purposes of keeping this answer succinct here is the proof on Wikipedia.
Intuitively, this makes sense since if your sequence of random variables converges to another sequence of random variables you would expect that they both share the same limiting distribution. This is not generally true in reverse.
However, and quite importantly, if a random variable converges in distribution to a constant then it also converges in probability to that constant. That is,
$$X_n \overset{d}{\to}c\implies X_n \overset{p}{\to}c$$
Here is a proof of this fact.
Important bits: Notice that we have only been talking about convergence in probability and convergence in distribution, not consistency or asymptotic normality! This is very important because it actually turns out that,
$$\text{Asymptotic Normality} \implies \text{Consistency}$$
But not vice versa! This is counter-intuitive given what we just saw, but it makes sense given the definitions of the two concepts. The main idea is that consistency says nothing about the limiting distribution of our estimator; it only says that the estimator converges to the true value of the parameter which, as you say, is a constant. Thus, consistency only takes a stand on where the distribution concentrates, not on the whole distribution. However, this is usually not enough to do inference, because we want to know not just where our estimator concentrates but also the precision with which it concentrates (i.e. we want an approximation to the estimator's distribution so we can determine standard errors). This can be done with convergence in distribution.
Asymptotic normality says that our scaled and differenced estimator converges in distribution to a random variable. This means that we can do the following computation,
$$\theta_n - \theta_0 = O_p\left(\tfrac{1}{\sqrt{n}}\right)=o_p(1) \text{ as } n\to\infty$$
The first equality follows because, by the definition of convergence in distribution,
$$\Pr[||\sqrt{n}(\theta_n-\theta_0)||> M]\to\Pr[||Z||>M]$$
where $Z\sim N(0,\sigma^2)$, and by Markov's inequality,
$$\Pr[||Z||>M]\leq \frac{\mathbb{E}[||Z||]}{M}$$
So for any $\epsilon>0$ we can choose $M$ sufficiently large (and then $n$ large enough) that $\Pr[||\sqrt{n}(\theta_n-\theta_0)||\geq M] < \epsilon$, which is exactly the $O_p(1/\sqrt{n})$ statement; the $o_p(1)$ part then says $\theta_n \overset{p}{\to} \theta_0$.
Thus, we can even think of consistency as a necessary condition for asymptotic normality. Of course, the reverse is not true. For a counterexample take a consistent and asymptotically normal estimator $\theta_n$ and define $\tilde\theta_n = \theta_n + n^{-1/3}$. Clearly $\tilde\theta_n \to \theta_0$ and is thus consistent but $\sqrt{n}(\tilde\theta_n -\theta_0)$ will not converge.
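To spell out why this scaled counterexample diverges:
$$\sqrt{n}\,(\tilde\theta_n - \theta_0) = \sqrt{n}\,(\theta_n - \theta_0) + \sqrt{n}\cdot n^{-1/3} = \sqrt{n}\,(\theta_n - \theta_0) + n^{1/6},$$
where the first term is $O_p(1)$ by the asymptotic normality of $\theta_n$, while the deterministic term $n^{1/6} \to \infty$, so the sum cannot converge in distribution.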
The idea here is that consistency is a weaker notion and refers only to the value of our estimator as $n\to\infty$. Thus, it does not affect our ability to talk about the asymptotic normality of our estimator which is really talking about the limiting behavior of our estimator beyond the value it concentrates around. | Asymptotic Normality and Consistency | I think you may either be confused with the modes of convergence associated with consistency and asymptotic normality or with the definition of consistency. It is important to note that consistency is | Asymptotic Normality and Consistency
I think you may either be confused with the modes of convergence associated with consistency and asymptotic normality or with the definition of consistency. It is important to note that consistency is defined as convergence to the true value of the parameter of interest. Regardless, let us begin with the definitions,
We say that an estimator, $\theta_n$, is consistent if it converges in probability to the true value, $\theta_0$, of the parameter which can be denoted by,
$$\theta_n \overset{p}{\to} \theta_0 \iff \lim_{n\to\infty}\Pr[||\theta_n -\theta_0||>\epsilon]=0 \text{ for every } \epsilon > 0$$
We say that an estimator, $\theta_n$, is asymptotically normal if it converges in distribution (or in law, or weakly) to a normally distributed random variable. That is,
$$\sqrt{n}(\theta_n - \theta_0)\overset{d}{\to} N(0,\sigma^2)$$
There are several definitions of convergence in distribution depending on the level of abstraction you want to consider, for now, let us use the most commonly used definition:
We say that a sequence of random variables $Y_1, Y_2,\dots$ converges in distribution if $F_{Y_n}(y)\to F_Y(y)$ for all continuity points $y$ of the cdf $F_{Y}$.
To answer your question, you need to understand the differences between these modes of convergence. The first thing to note is that convergence in distribution is weaker than convergence in probability. In fact, convergence in probability implies convergence in distribution. That is,
$$|Y_n - X_n|\overset{p}{\to}0, X_n \overset{d}{\to}X \implies Y_n \overset{d}{\to} X$$
This takes some work to show for purposes of keeping this answer succinct here is the proof on Wikipedia.
Intuitively, this makes sense since if your sequence of random variables converges to another sequence of random variables you would expect that they both share the same limiting distribution. This is not generally true in reverse.
However, and quite importantly, if a random variable converges in distribution to a constant then it also converges in probability to that constant. That is,
$$X_n \overset{d}{\to}c\implies X_n \overset{p}{\to}c$$
Here is a proof of this fact.
Important bits: Notice that we have only been talking about convergence in probability and convergence in distribution, not consistency or asymptotic normality! This is very important because it actually turns out that,
$$\text{Asymptotic Normality} \implies \text{Consistency}$$
But not vice versa! This is counter-intuitive given what we just saw, but it makes sense given the definitions of the two concepts. The main idea is that consistency says nothing about the limiting distribution of our estimator; it only says that the estimator converges to the true value of the parameter which, as you say, is a constant. Thus, consistency only takes a stand on where the distribution concentrates, not on the whole distribution. However, this is usually not enough to do inference, because we want to know not just where our estimator concentrates but also the precision with which it concentrates (i.e. we want an approximation to the estimator's distribution so we can determine standard errors). This can be done with convergence in distribution.
Asymptotic normality says that our scaled and differenced estimator converges in distribution to a random variable. This means that we can do the following computation,
$$\theta_n - \theta_0 = O_p\left(\tfrac{1}{\sqrt{n}}\right)=o_p(1) \text{ as } n\to\infty$$
The first equality follows because, by the definition of convergence in distribution,
$$\Pr[||\sqrt{n}(\theta_n-\theta_0)||> M]\to\Pr[||Z||>M]$$
where $Z\sim N(0,\sigma^2)$, and by Markov's inequality,
$$\Pr[||Z||>M]\leq \frac{\mathbb{E}[||Z||]}{M}$$
So for any $\epsilon>0$ we can choose $M$ sufficiently large (and then $n$ large enough) that $\Pr[||\sqrt{n}(\theta_n-\theta_0)||\geq M] < \epsilon$, which is exactly the $O_p(1/\sqrt{n})$ statement; the $o_p(1)$ part then says $\theta_n \overset{p}{\to} \theta_0$.
Thus, we can even think of consistency as a necessary condition for asymptotic normality. Of course, the reverse is not true. For a counterexample take a consistent and asymptotically normal estimator $\theta_n$ and define $\tilde\theta_n = \theta_n + n^{-1/3}$. Clearly $\tilde\theta_n \to \theta_0$ and is thus consistent but $\sqrt{n}(\tilde\theta_n -\theta_0)$ will not converge.
The idea here is that consistency is a weaker notion and refers only to the value of our estimator as $n\to\infty$. Thus, it does not affect our ability to talk about the asymptotic normality of our estimator which is really talking about the limiting behavior of our estimator beyond the value it concentrates around. | Asymptotic Normality and Consistency
I think you may either be confused with the modes of convergence associated with consistency and asymptotic normality or with the definition of consistency. It is important to note that consistency is |
38,463 | Biased estimates in logistic regression due to class imbalance | Let's find out.
To begin with, what happens with balanced datasets?
Here is a scatterplot of a dataset of $200$ observations, of which $50\%$ are zeros and the remainder are ones. On it I have graphed the underlying ("true") probabilities and the probabilities as fit with logistic regression. The two graphs agree closely, indicating logistic regression did a good job in this case.
To understand it better, I kept the same $x$ values but regenerated the $y$ values randomly $500$ times. Each fit yielded its estimate of the intercept and slope ($\hat\beta$) in this logistic regression. Here is a scatterplot of those estimates.
The central red triangle plots the true coefficients $(0, -3).$ The ellipses are the second order approximations to this point cloud: one is intended to enclose about half the points and the other is intended to enclose about 95% of the points. That they do so indicates they give a solid indication of how uncertain any given estimate of such a dataset might be: the intercept could be off by about $\pm 0.45$ (the width of the outer ellipse) and the slope could be off by about $\pm 1$ (the height of the outer ellipse). These are margins of error.
What happens with imbalanced datasets?
Here's a similar setup but with only $5\%$ of the points in one class (give or take a few points, depending on the randomness involved in making these observations):
($5\%$ is truly small: it tells us to expect to see only $10$ or so values in one class with the other $190$ in the other class.)
The fit now visibly departs from the true graph -- but is this evidence of logistic regression failing to be "robust"? Again, we can find out by repeating the process of generating random data and estimating the fit many times. Here is the scatterplot of $500$ estimates.
By and large the estimates stay near the true value, which is near $(-4,-3).$ In this sense, logistic regression looks "robust." (I kept the same slope of $-3$ as before and adjusted the intercept to reduce the rate of the $+1$ observations.)
The margins of error have changed: the semi-axis of the outer ellipse that (sort of) describes the uncertainty in the intercept has grown from $0.45$ to over $4$ while the other semi-axis has shrunk a little from $1$ to about $0.8;$ and the whole picture has tilted.
The ellipses no longer describe the point cloud quite as well as before: now, there is some tendency for logistic regression to estimate extremely negative slopes and intercepts. The tilting indicates noticeable correlation among the estimates: low (negative) intercepts tend to be associated with low negative slopes (which compensate for the small intercepts by predicting some $1$ values near $x=-1.$) But such correlation is to be expected: this looks just like ordinary least squares regression whenever the point of averages of the data is not close to the vertical axis.
What do these experiments show?
For datasets this size (or larger), at least:
Logistic regression tends to work well and give values reasonably close to the correct parameters even when the outcomes are imbalanced.
Second-order descriptions of the correlation between the parameter estimates (which are routine outputs of logistic regression) don't quite capture the possibility that the estimates could simultaneously be quite far away from the truth.
A meta-conclusion
You can assess the "robustness" (or, more generally, the salient statistical properties) of any procedure, such as logistic regression, by running it repeatedly on data generated according to a known realistic model and tracking the outputs that are important to you.
This is the R code that produced the figures. For the first two figures, the first line was altered to p <- 50/100. Remove the set.seed call to generate additional random examples.
Experimenting with simulations like this (extended to more explanatory variables) might persuade you of the utility of a standard rule of thumb:
Let the number of observations in the smaller class guide the complexity of the model.
Whereas in ordinary least squares regression we might be comfortable having ten observations (total) for each explanatory variable, for logistic regression we will want to have ten observations in the smaller class for each explanatory variable.
p <- 5/100 # Proportion of one class
n <- 200 # Dataset size
x <- seq(-1, 1, length.out=n) # Explanatory variable
beta <- -3 # Slope
#
# Find an intercept that yields `p` as the true proportion for these `x`.
#
logistic <- function(z) 1 - 1/(1 + exp(z))
alpha <- uniroot(function(a) mean(logistic(a + beta*x)) - p, c(-5,5))$root
#
# Create and plot a dataset with an expected value of `p`.
#
set.seed(17)
y <- rbinom(n, 1, logistic(alpha + beta*x))
plot(range(x), c(-0.015, 1.015), type="n", bty="n", xlab="x", ylab="y",
main="Data with True (Solid) and Fitted (Dashed) Probabilities")
curve(logistic(alpha + beta*x), add=TRUE, col="Gray", lwd=2)
rug(x[y==0], side=1, col="Red")
rug(x[y==1], side=3, col="Red")
points(x, y, pch=21, bg="#00000020")
#
# Fit a logistic model.
#
X <- data.frame(x=x, y=y)
fit <- glm(y ~ x, data=X, family="binomial")
summary(fit)
#
# Sketch the fit.
#
b <- coefficients(fit)
curve(logistic(b[1] + b[2]*x), add=TRUE, col="Black", lty=3, lwd=2)
#
# Evaluate the robustness of the fit.
#
sim <- replicate(500, {
X$y.new <- with(X, rbinom(n, 1, logistic(alpha + beta*x)))
coefficients(glm(y.new ~ x, data=X, family="binomial"))
})
plot(t(sim), main="Estimated Coefficients", ylab="")
mtext(expression(hat(beta)), side=2, line=2.5, las=2, cex=1.25)
points(alpha, beta, pch=24, bg="#ff0000c0", cex=1.6)
#
# Plot second moment ellipses.
#
V <- cov(t(sim))
obj <- eigen(V)
a <- seq(0, 2*pi, length.out=361)
for (level in c(.50, .95)) {
lambda <- sqrt(obj$values) * sqrt(qchisq(level, 2))
st <- obj$vectors %*% (rbind(cos(a), sin(a)) * lambda) + c(alpha, beta)
polygon(st[1,], st[2,], col="#ffff0010")
} | Biased estimates in logistic regression due to class imbalance | Let's find out.
To begin with, what happens with balanced datasets?
Here is a scatterplot of a dataset of $200$ observations, of which $50\%$ are zeros and the remainder are ones. On it I have graphe | Biased estimates in logistic regression due to class imbalance
Let's find out.
To begin with, what happens with balanced datasets?
Here is a scatterplot of a dataset of $200$ observations, of which $50\%$ are zeros and the remainder are ones. On it I have graphed the underlying ("true") probabilities and the probabilities as fit with logistic regression. The two graphs agree closely, indicating logistic regression did a good job in this case.
To understand it better, I kept the same $x$ values but regenerated the $y$ values randomly $500$ times. Each fit yielded its estimate of the intercept and slope ($\hat\beta$) in this logistic regression. Here is a scatterplot of those estimates.
The central red triangle plots the true coefficients $(0, -3).$ The ellipses are the second order approximations to this point cloud: one is intended to enclose about half the points and the other is intended to enclose about 95% of the points. That they do so indicates they give a solid indication of how uncertain any given estimate of such a dataset might be: the intercept could be off by about $\pm 0.45$ (the width of the outer ellipse) and the slope could be off by about $\pm 1$ (the height of the outer ellipse). These are margins of error.
What happens with imbalanced datasets?
Here's a similar setup but with only $5\%$ of the points in one class (give or take a few points, depending on the randomness involved in making these observations):
($5\%$ is truly small: it tells us to expect to see only $10$ or so values in one class with the other $190$ in the other class.)
The fit now visibly departs from the true graph -- but is this evidence of logistic regression failing to be "robust"? Again, we can find out by repeating the process of generating random data and estimating the fit many times. Here is the scatterplot of $500$ estimates.
By and large the estimates stay near the true value, which is near $(-4,-3).$ In this sense, logistic regression looks "robust." (I kept the same slope of $-3$ as before and adjusted the intercept to reduce the rate of the $+1$ observations.)
The margins of error have changed: the semi-axis of the outer ellipse that (sort of) describes the uncertainty in the intercept has grown from $0.45$ to over $4$ while the other semi-axis has shrunk a little from $1$ to about $0.8;$ and the whole picture has tilted.
The ellipses no longer describe the point cloud quite as well as before: now, there is some tendency for logistic regression to estimate extremely negative slopes and intercepts. The tilting indicates noticeable correlation among the estimates: low (negative) intercepts tend to be associated with low negative slopes (which compensate for the small intercepts by predicting some $1$ values near $x=-1.$) But such correlation is to be expected: this looks just like ordinary least squares regression whenever the point of averages of the data is not close to the vertical axis.
What do these experiments show?
For datasets this size (or larger), at least:
Logistic regression tends to work well and give values reasonably close to the correct parameters even when the outcomes are imbalanced.
Second-order descriptions of the correlation between the parameter estimates (which are routine outputs of logistic regression) don't quite capture the possibility that the estimates could simultaneously be quite far away from the truth.
A meta-conclusion
You can assess the "robustness" (or, more generally, the salient statistical properties) of any procedure, such as logistic regression, by running it repeatedly on data generated according to a known realistic model and tracking the outputs that are important to you.
This is the R code that produced the figures. For the first two figures, the first line was altered to p <- 50/100. Remove the set.seed call to generate additional random examples.
Experimenting with simulations like this (extended to more explanatory variables) might persuade you of the utility of a standard rule of thumb:
Let the number of observations in the smaller class guide the complexity of the model.
Whereas in ordinary least squares regression we might be comfortable having ten observations (total) for each explanatory variable, for logistic regression we will want to have ten observations in the smaller class for each explanatory variable.
p <- 5/100 # Proportion of one class
n <- 200 # Dataset size
x <- seq(-1, 1, length.out=n) # Explanatory variable
beta <- -3 # Slope
#
# Find an intercept that yields `p` as the true proportion for these `x`.
#
logistic <- function(z) 1 - 1/(1 + exp(z))
alpha <- uniroot(function(a) mean(logistic(a + beta*x)) - p, c(-5,5))$root
#
# Create and plot a dataset with an expected value of `p`.
#
set.seed(17)
y <- rbinom(n, 1, logistic(alpha + beta*x))
plot(range(x), c(-0.015, 1.015), type="n", bty="n", xlab="x", ylab="y",
main="Data with True (Solid) and Fitted (Dashed) Probabilities")
curve(logistic(alpha + beta*x), add=TRUE, col="Gray", lwd=2)
rug(x[y==0], side=1, col="Red")
rug(x[y==1], side=3, col="Red")
points(x, y, pch=21, bg="#00000020")
#
# Fit a logistic model.
#
X <- data.frame(x=x, y=y)
fit <- glm(y ~ x, data=X, family="binomial")
summary(fit)
#
# Sketch the fit.
#
b <- coefficients(fit)
curve(logistic(b[1] + b[2]*x), add=TRUE, col="Black", lty=3, lwd=2)
#
# Evaluate the robustness of the fit.
#
sim <- replicate(500, {
X$y.new <- with(X, rbinom(n, 1, logistic(alpha + beta*x)))
coefficients(glm(y.new ~ x, data=X, family="binomial"))
})
plot(t(sim), main="Estimated Coefficients", ylab="")
mtext(expression(hat(beta)), side=2, line=2.5, las=2, cex=1.25)
points(alpha, beta, pch=24, bg="#ff0000c0", cex=1.6)
#
# Plot second moment ellipses.
#
V <- cov(t(sim))
obj <- eigen(V)
a <- seq(0, 2*pi, length.out=361)
for (level in c(.50, .95)) {
lambda <- sqrt(obj$values) * sqrt(qchisq(level, 2))
st <- obj$vectors %*% (rbind(cos(a), sin(a)) * lambda) + c(alpha, beta)
polygon(st[1,], st[2,], col="#ffff0010")
} | Biased estimates in logistic regression due to class imbalance
Let's find out.
To begin with, what happens with balanced datasets?
Here is a scatterplot of a dataset of $200$ observations, of which $50\%$ are zeros and the remainder are ones. On it I have graphe |
38,464 | Number of expected pairs in a random shuffle | First, we find the probability that two adjacent individuals are of different genders. Those two people are equally likely to be any of the ${34 \choose 2}$ pairs of people, of which $18 \times 16$ are male-female pairs, so this probability is $(18 \times 16)/{34 \choose 2}$.
The total number of pairs is $X_1 + X_2 + \cdots + X_{33}$ where $X_i$ is an indicator random variable that is 1 if person $i$ and person $i+1$ are of opposite genders and 0 otherwise. Its expectation is $E(X_1) + \cdots + E(X_{33})$, but all these variables have the same expectation, the probability we found above. So the answer is
$$ 33 \times {18 \times 16 \over {34 \choose 2}} = {33 \times 18 \times 16 \over (34 \times 33)/2} = {18 \times 16 \over 17} = {17^2 - 1 \over 17} = 17 - {1 \over 17} \approx 16.94.$$
This agrees with Bernhard's simulation.
More generally, if you have $m$ males and $f$ females and the same problem you get
$$ (m+f-1) {mf \over {m+f \choose 2}} = {(m+f-1) mf \over (m+f)(m+f-1)/2} = {2mf \over m+f}$$
which can also be checked by simulation. | Number of expected pairs in a random shuffle | First, we find the probability that two adjacent individuals are of different genders. Those two people are equally likely to be any of the ${34 \choose 2}$ pairs of people, of which $18 \times 16$ | Number of expected pairs in a random shuffle
First, we find the probability that two adjacent individuals are of different genders. Those two people are equally likely to be any of the ${34 \choose 2}$ pairs of people, of which $18 \times 16$ are male-female pairs, so this probability is $(18 \times 16)/{34 \choose 2}$.
The total number of pairs is $X_1 + X_2 + \cdots + X_{33}$ where $X_i$ is an indicator random variable that is 1 if person $i$ and person $i+1$ are of opposite genders and 0 otherwise. Its expectation is $E(X_1) + \cdots + E(X_{33})$, but all these variables have the same expectation, the probability we found above. So the answer is
$$ 33 \times {18 \times 16 \over {34 \choose 2}} = {33 \times 18 \times 16 \over (34 \times 33)/2} = {18 \times 16 \over 17} = {17^2 - 1 \over 17} = 17 - {1 \over 17} \approx 16.94.$$
This agrees with Bernhard's simulation.
More generally, if you have $m$ males and $f$ females and the same problem you get
$$ (m+f-1) {mf \over {m+f \choose 2}} = {(m+f-1) mf \over (m+f)(m+f-1)/2} = {2mf \over m+f}$$
which can also be checked by simulation. | Number of expected pairs in a random shuffle
First, we find the probability that two adjacent individuals are of different genders. Those two people are equally likely to be any of the ${34 \choose 2}$ pairs of people, of which $18 \times 16$ |
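The general $2mf/(m+f)$ formula can be checked by simulation along the same lines as Bernhard's code; here is a small R sketch with arbitrarily chosen $m = 7$ and $f = 5$:
set.seed(42)
m <- 7; f <- 5
pairs_once <- function() {
  row <- sample(c(rep("m", m), rep("f", f)))
  sum(row[-1] != row[-length(row)])          # adjacent positions with different genders
}
mean(replicate(1e5, pairs_once()))           # simulated expectation, about 5.83
2 * m * f / (m + f)                          # exact value: 5.8333...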
38,465 | Number of expected pairs in a random shuffle | Someone much wiser than me will post a theoretical and exact solution. Meanwhile, my attempt at a simulation:
once <- function(){
row <- sample(c(rep("m", 18), rep("f",16)))
count <- 0
for(i in 1:33){
if(row[i]!=row[i+1])
count <- count + 1
}
return(count)
}
run <- replicate(1e5, once())
plot(table(run))
> table(run)
run
6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29
5 25 79 280 663 1643 3070 5555 8057 11169 12883 14309 12818 11020 7682 5279 2931 1535 635 261 78 17 4 2
> summary(run)
Min. 1st Qu. Median Mean 3rd Qu. Max.
6.00 15.00 17.00 16.96 19.00 29.00 | Number of expected pairs in a random shuffle | Someone much wiser then me will post a theoretical and exact solution. Meanwhile my attempt at a simulation:
once <- function(){
row <- sample(c(rep("m", 18), rep("f",16)))
count <- 0
for( | Number of expected pairs in a random shuffle
Someone much wiser than me will post a theoretical and exact solution. Meanwhile, my attempt at a simulation:
once <- function(){
row <- sample(c(rep("m", 18), rep("f",16)))
count <- 0
for(i in 1:33){
if(row[i]!=row[i+1])
count <- count + 1
}
return(count)
}
run <- replicate(1e5, once())
plot(table(run))
> table(run)
run
6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29
5 25 79 280 663 1643 3070 5555 8057 11169 12883 14309 12818 11020 7682 5279 2931 1535 635 261 78 17 4 2
> summary(run)
Min. 1st Qu. Median Mean 3rd Qu. Max.
6.00 15.00 17.00 16.96 19.00 29.00 | Number of expected pairs in a random shuffle
Someone much wiser than me will post a theoretical and exact solution. Meanwhile, my attempt at a simulation:
once <- function(){
row <- sample(c(rep("m", 18), rep("f",16)))
count <- 0
for( |
38,466 | What are the advantages of FC layers over Conv layers? | The strength of convolutional layers over fully connected layers is precisely that they represent a narrower range of features than fully-connected layers. A neuron in a fully connected layer is connected to every neuron in the preceding layer, and so can change if any of the neurons from the preceding layer changes. A neuron in a convolutional layer, however, is only connected to "nearby" neurons from the preceding layer within the width of the convolutional kernel. As a result, the neurons from a convolutional layer can represent a narrower range of features in the sense that the activation of any one neuron is insensitive to the activations of most of the neurons from the previous layer.
Restricting the range of features in this way can be useful in cases where we expect most of the information to be local. In image classification, for example, a bird will look like a bird based on the pixels in the location of the bird, regardless of its location in the image and regardless of whether there is also a car somewhere else in the image. The utility of this prior expectation is born out by the observation that even CNNs with totally random weights provide features that are nearly as useful for classification as fully-trained CNNs. | What are the advantages of FC layers over Conv layers? | The strength of convolutional layers over fully connected layers is precisely that they represent a narrower range of features than fully-connected layers. A neuron in a fully connected layer is conne | What are the advantages of FC layers over Conv layers?
The strength of convolutional layers over fully connected layers is precisely that they represent a narrower range of features than fully-connected layers. A neuron in a fully connected layer is connected to every neuron in the preceding layer, and so can change if any of the neurons from the preceding layer changes. A neuron in a convolutional layer, however, is only connected to "nearby" neurons from the preceding layer within the width of the convolutional kernel. As a result, the neurons from a convolutional layer can represent a narrower range of features in the sense that the activation of any one neuron is insensitive to the activations of most of the neurons from the previous layer.
Restricting the range of features in this way can be useful in cases where we expect most of the information to be local. In image classification, for example, a bird will look like a bird based on the pixels in the location of the bird, regardless of its location in the image and regardless of whether there is also a car somewhere else in the image. The utility of this prior expectation is born out by the observation that even CNNs with totally random weights provide features that are nearly as useful for classification as fully-trained CNNs. | What are the advantages of FC layers over Conv layers?
The strength of convolutional layers over fully connected layers is precisely that they represent a narrower range of features than fully-connected layers. A neuron in a fully connected layer is conne |
38,467 | What are the advantages of FC layers over Conv layers? | As mentioned in the wiki article, convolutional layers are optimized for translationally-invariant parameters, such as pixel intensities in images and video. If your parameters represent a discretized sample of a continuous variable, such as space or time, then translational invariance means that every window of the parameters (such as a 10x10 pixel slice of the image) is to some extent similar to every other and benefits from being pre-processed (filtered) by the same means. In this case, you can select a convolutional layer, and by doing so, enforce your knowledge about the symmetries of this world onto your neural network.
On the other hand, if you have a bunch of input parameters whose indices are not related to their meaning (e.g. params=[temperature, pressure, volume, loudness, brightness, ...]), then they are most certainly not translationally-invariant, the intrinsic assumptions of the convolution layer are not met, and it is only detrimental to use it. | What are the advantages of FC layers over Conv layers? | As mentioned in the wiki article, convolutional layers are optimized for translationally-invariant parameters, such as pixel intensities in images and video. If your parameters represent a discretized | What are the advantages of FC layers over Conv layers?
As mentioned in the wiki article, convolutional layers are optimized for translationally-invariant parameters, such as pixel intensities in images and video. If your parameters represent a discretized sample of a continuous variable, such as space or time, then translational invariance means that every window of the parameters (such as a 10x10 pixel slice of the image) is to some extent similar to every other and benefits from being pre-processed (filtered) by the same means. In this case, you can select a convolutional layer, and by doing so, enforce your knowledge about the symmetries of this world onto your neural network.
On the other hand, if you have a bunch of input parameters whose indices are not related to their meaning (e.g. params=[temperature, pressure, volume, loudness, brightness, ...]), then they are most certainly not translationally-invariant, the intrinsic assumptions of the convolution layer are not met, and it is only detrimental to use it. | What are the advantages of FC layers over Conv layers?
As mentioned in the wiki article, convolutional layers are optimized for translationally-invariant parameters, such as pixel intensities in images and video. If your parameters represent a discretized |
38,468 | Why is considering a maximum likelihood as a random variable a frequentist approach? | This is a frequentist approach because we are considering $\mu$ to be fixed. Thus, all the variance of $\hat \mu$ comes from the data.
Technically, a Bayesian would say that $V[\hat \mu] = V[\mu] + V[\hat \mu | \mu]$ (assuming the variance of $\hat \mu$ is independent of $\mu$).
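For completeness, that decomposition follows from the law of total variance, provided we also assume the estimator is conditionally unbiased (an assumption that is implicit here rather than stated in the answer):
$$V[\hat\mu] = V\big(E[\hat\mu \mid \mu]\big) + E\big(V[\hat\mu \mid \mu]\big) = V[\mu] + V[\hat\mu \mid \mu],$$
where the second equality uses $E[\hat\mu \mid \mu] = \mu$ together with the stated assumption that $V[\hat\mu \mid \mu]$ does not depend on $\mu$.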
Moreover, I think the authors are just saying they are using MLEs, which have a frequentist justification. | Why is considering a maximum likelihood as a random variable a frequentist approach? | This is a frequentist approach because we are considering $\mu$ to be fixed. Thus, all the variance of $\hat \mu$ comes from the data.
Technically, a Bayesian would say that $V[\hat \mu] = V[\mu] + V | Why is considering a maximum likelihood as a random variable a frequentist approach?
This is a frequentist approach because we are considering $\mu$ to be fixed. Thus, all the variance of $\hat \mu$ comes from the data.
Technically, a Bayesian would say that $V[\hat \mu] = V[\mu] + V[\hat \mu | \mu]$ (assuming the variance of $\hat \mu$ is independent of $\mu$).
Moreover, I think the authors are just saying they are using MLEs, which have a frequentist justification. | Why is considering a maximum likelihood as a random variable a frequentist approach?
This is a frequentist approach because we are considering $\mu$ to be fixed. Thus, all the variance of $\hat \mu$ comes from the data.
Technically, a Bayesian would say that $V[\hat \mu] = V[\mu] + V |
38,469 | Why is considering a maximum likelihood as a random variable a frequentist approach? | Considering an MLE as a random variable is not exclusively a frequentist approach, and indeed, the author you quote does not claim this. The operative word in the quote is "and" --- we use a frequentist approach and consider the maximum likelihood estimator as a random variable. The latter is not an exclusive consequence of the former. | Why is considering a maximum likelihood as a random variable a frequentist approach? | Considering an MLE as a random variable is not exclusively a frequentist approach, and indeed, the author you quote does not claim this. The operative word in the quote is "and" --- we use a frequent | Why is considering a maximum likelihood as a random variable a frequentist approach?
Considering an MLE as a random variable is not exclusively a frequentist approach, and indeed, the author you quote does not claim this. The operative word in the quote is "and" --- we use a frequentist approach and consider the maximum likelihood estimator as a random variable. The latter is not an exclusive consequence of the former. | Why is considering a maximum likelihood as a random variable a frequentist approach?
Considering an MLE as a random variable is not exclusively a frequentist approach, and indeed, the author you quote does not claim this. The operative word in the quote is "and" --- we use a frequent |
38,470 | Accounting for grouped random effects in lme4 | A couple of points:
If I understood correctly, your response variable is a grade, ranging from 1 to 5. This is an ordinal variable with relatively few levels. Hence, assuming a normal distribution for the residuals may not be appropriate. You could consider a mixed model for ordinal data instead, such as the continuation ratio model.
You are right that subject and simulation seem to be fully crossed factors. Hence, a possible model to consider is:
grade ~ category * gender + (1 | subject) + (1 | simulation) | Accounting for grouped random effects in lme4 | A couple of points:
If I understood correctly, your response variable is a grade, ranging from 1 to 5. This is an ordinal variable with relatively few levels. Hence, assuming a normal distribution fo | Accounting for grouped random effects in lme4
A couple of points:
If I understood correctly, your response variable is a grade, ranging from 1 to 5. This is an ordinal variable with relatively few levels. Hence, assuming a normal distribution for the residuals may not be appropriate. You could consider a mixed model for ordinal data instead, such as the continuation ratio model.
You are right that subject and simulation seem to be fully crossed factors. Hence, a possible model to consider is:
grade ~ category * gender + (1 | subject) + (1 | simulation) | Accounting for grouped random effects in lme4
A couple of points:
If I understood correctly, your response variable is a grade, ranging from 1 to 5. This is an ordinal variable with relatively few levels. Hence, assuming a normal distribution fo |
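A minimal sketch of how the crossed-random-effects model suggested above could be fit with lme4 (the data-frame name dat and its column names are assumptions for illustration only):
library(lme4)
fit <- lmer(grade ~ category * gender + (1 | subject) + (1 | simulation), data = dat)
summary(fit)   # variance components for subject and simulation, plus the fixed effects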
38,471 | Accounting for grouped random effects in lme4 | If you believe the different categories vary in terms of their measurement and you want to test that, you need to model the category as a random intercept with (1|category).
However, I think we need more information as to what you are actually looking to decide. For example, are you wondering if each person measures them differently? Or if each stimulus is measured differently? | Accounting for grouped random effects in lme4 | If you believe the different categories vary in terms of their measurement and you want to test that, you need to model the category as a random intercept with (1|category).
However, I think we need m | Accounting for grouped random effects in lme4
If you believe the different categories vary in terms of their measurement and you want to test that, you need to model the category as a random intercept with (1|category).
However, I think we need more information as to what you are actually looking to decide. For example, are you wondering if each person measures them differently? Or if each stimulus is measured differently? | Accounting for grouped random effects in lme4
If you believe the different categories vary in terms of their measurement and you want to test that, you need to model the category as a random intercept with (1|category).
However, I think we need m |
38,472 | Accounting for grouped random effects in lme4 | It seems that in the model:
lmer(measure ~ category + (1|subject) + (1|stimulus), data = My_data)
category is being used to denote the levels of stimulus. As such, this does not make sense, since category is not an actual variable.
Even though stimulus is random in the sense that it has (presumably) been randomly assigned to each subject, this does not mean that is should be included as a random effect - unless there is no interest in the treatment effect of stimulus. In that case, you would simply be partitioning variance into the subject level and the stimulus level.
It seems more likely that you are in fact interested in the associations between each level of the stimulus and the outcome - that is, you are interested in the treatment effect and therefore the model should be of the form:
measure ~ stimulus + (1|subject) | Accounting for grouped random effects in lme4 | It seems that in the model:
lmer(measure ~ category + (1|subject) + (1|stimulus), data = My_data)
category is being used to denote the levels of stimulus. As such, this does not make sense, since cat | Accounting for grouped random effects in lme4
It seems that in the model:
lmer(measure ~ category + (1|subject) + (1|stimulus), data = My_data)
category is being used to denote the levels of stimulus. As such, this does not make sense, since category is not an actual variable.
Even though stimulus is random in the sense that it has (presumably) been randomly assigned to each subject, this does not mean that is should be included as a random effect - unless there is no interest in the treatment effect of stimulus. In that case, you would simply be partitioning variance into the subject level and the stimulus level.
It seems more likely that you are in fact interested in the associations between each level of the stimulus and the outcome - that is, you are interested in the treatment effect and therefore the model should be of the form:
measure ~ stimulus + (1|subject) | Accounting for grouped random effects in lme4
It seems that in the model:
lmer(measure ~ category + (1|subject) + (1|stimulus), data = My_data)
category is being used to denote the levels of stimulus. As such, this does not make sense, since cat |
38,473 | Accounting for grouped random effects in lme4 | In your setting, subject and stimulus seem to be fully crossed random grouping factors - since each subject sees each stimulus and (I am assuming) you are using the subjects and stimuli included in your studies to represent all the subjects and all the stimuli you wish to generalize your study findings to.
The key word here is grouping - for your model to be a linear mixed effects model (lmer), each subject by stimulus combination should act like a container which holds together a group of values for your measure outcome. All the values of measure that belong to the same container are more similar to each other than values that belong to different containers, as they are subjected to the same subject-level and stimulus-level influences (presuming these influences are constant over time).
The group of values in a specific container could arise, for instance, if you record the value of measure at several time points for each subject by stimulus combination, or under two or more different conditions, etc.
If you only have one value of measure per subject by stimulus combination, then you're dealing with a linear model (lm). There is no grouping of observations according to each subject per stimulus combination, so there are no random grouping factors which means there aren't any effects that can vary randomly across combinations of levels of the grouping factors (i.e., random effects). If there aren't any random effects, there can't be a mixed effects model, as such a model would require both fixed and random effects to be part of it!
If you do have multiple values of measure per container (i.e., subject by stimulus combination), then your model can include subject-level predictors (e.g., subject gender, subject age) and/or stimulus-level predictors (e.g., stimulus category). | Accounting for grouped random effects in lme4 | In your setting, subject and stimulus seem to be fully crossed random grouping factors - since each subject sees each stimulus and (I am assuming) you are using the subjects and stimuli included in yo | Accounting for grouped random effects in lme4
In your setting, subject and stimulus seem to be fully crossed random grouping factors - since each subject sees each stimulus and (I am assuming) you are using the subjects and stimuli included in your studies to represent all the subjects and all the stimuli you wish to generalize your study findings to.
The key word here is grouping - for your model to be a linear mixed effects model (lmer), each subject by stimulus combination should act like a container which holds together a group of values for your measure outcome. All the values of measure that belong to the same container are more similar to each other than values that belong to different containers, as they are subjected to the same subject-level and stimulus-level influences (presuming these influences are constant over time).
The group of values in a specific container could arise, for instance, if you record the value of measure at several time points for each subject by stimulus combination, or under two or more different conditions, etc.
If you only have one value of measure per subject by stimulus combination, then you're dealing with a linear model (lm). There is no grouping of observations according to each subject per stimulus combination, so there are no random grouping factors which means there aren't any effects that can vary randomly across combinations of levels of the grouping factors (i.e., random effects). If there aren't any random effects, there can't be a mixed effects model, as such a model would require both fixed and random effects to be part of it!
If you do have multiple values of measure per container (i.e., subject by stimulus combination), then your model can include subject-level predictors (e.g., subject gender, subject age) and/or stimulus-level predictors (e.g., stimulus category). | Accounting for grouped random effects in lme4
In your setting, subject and stimulus seem to be fully crossed random grouping factors - since each subject sees each stimulus and (I am assuming) you are using the subjects and stimuli included in yo |
38,474 | Accounting for grouped random effects in lme4 | One important point is what you mean by "a1:c4 at day 1 are different from a1:c4 at day 2 and so on". If this means that the a1 from day 1 is not more linked to a1 from day 2 than to a2 from day 2, then you don't have 10 simulations, but 10 x 30 = 300 simulations (90 from category A, 90 from category B and 120 from category C). You should name them differently (e.g. a1day1, a2day1, ..., a3day30, b1day1, ...) and call this column e.g. AllSim. Then AllSim is your random effect column and you can construct models from it. The fact that subjects see only 10 of these 300 simulations is not a problem and is handled automatically by lme4. My guess is that the most interesting models will be
grade ~ category * gender + (1 | subject) + (1 | AllSim)
or if you put the maximal random slopes
grade ~ category * gender + (category | subject) + (gender | AllSim)
or even more complex models if you believe, e.g., that subjects get better and better with practice over their 10 simulations.
Edit: I read it too quickly, and actually your simulation:day, when put on the right of the "|" is exactly the same as my AllSim, so for the basic random effects, your 3 first models are totally right (and the two first ones correspond to my two models). The choice between these models should be either data-based or theory-driven. And as Box said, "all models are wrong, but some are useful." | Accounting for grouped random effects in lme4 | One important point is what you mean by "a1:c4 at day 1 are different from a1:c4 at day 2 and so on". If this means that the a1 from day 1 is not more linked to a1 from day 2 that to a2 from day 2, th | Accounting for grouped random effects in lme4
One important point is what you mean by "a1:c4 at day 1 are different from a1:c4 at day 2 and so on". If this means that the a1 from day 1 is not more linked to a1 from day 2 than to a2 from day 2, then you don't have 10 simulations, but 10 x 30 = 300 simulations (90 from category A, 90 from category B and 120 from category C). You should name them differently (e.g. a1day1, a2day1, ..., a3day30, b1day1, ...) and call this column e.g. AllSim. Then AllSim is your random effect column and you can construct models from it. The fact that subjects see only 10 of these 300 simulations is not a problem and is handled automatically by lme4. My guess is that the most interesting models will be
grade ~ category * gender + (1 | subject) + (1 | AllSim)
or if you put the maximal random slopes
grade ~ category * gender + (category | subject) + (gender | AllSim)
or even more complex models if you believe, e.g., that subjects get better and better with practice over their 10 simulations.
Edit: I read it too quickly, and actually your simulation:day, when put on the right of the "|" is exactly the same as my AllSim, so for the basic random effects, your 3 first models are totally right (and the two first ones correspond to my two models). The choice between these models should be either data-based or theory-driven. And as Box said, "all models are wrong, but some are useful." | Accounting for grouped random effects in lme4
One important point is what you mean by "a1:c4 at day 1 are different from a1:c4 at day 2 and so on". If this means that the a1 from day 1 is not more linked to a1 from day 2 that to a2 from day 2, th |
38,475 | Is the average of betas from Y ~ X and X ~ Y valid? | To see the connection between both representations, take a bivariate Normal vector:
$$
\begin{pmatrix}
X_1 \\
X_2
\end{pmatrix} \sim \mathcal{N} \left( \begin{pmatrix}
\mu_1 \\
\mu_2
\end{pmatrix} , \begin{pmatrix}
\sigma^2_1 & \rho \sigma_1 \sigma_2 \\
\rho \sigma_1 \sigma_2 & \sigma^2_2
\end{pmatrix} \right)
$$
with conditionals
$$X_1 \mid X_2=x_2 \sim \mathcal{N} \left( \mu_1 + \rho \frac{\sigma_1}{\sigma_2}(x_2 - \mu_2),(1-\rho^2)\sigma^2_1 \right)$$
and
$$X_2 \mid X_1=x_1 \sim \mathcal{N} \left( \mu_2 + \rho \frac{\sigma_2}{\sigma_1}(x_1 - \mu_1),(1-\rho^2)\sigma^2_2 \right)$$
This means that
$$X_1=\underbrace{\left(\mu_1-\rho \frac{\sigma_1}{\sigma_2}\mu_2\right)}_\alpha+\underbrace{\rho \frac{\sigma_1}{\sigma_2}}_\beta X_2+\sqrt{1-\rho^2}\sigma_1\epsilon_1$$
and
$$X_2=\underbrace{\left(\mu_2-\rho \frac{\sigma_2}{\sigma_1}\mu_1\right)}_\kappa+\underbrace{\rho \frac{\sigma_2}{\sigma_1}}_\gamma X_1+\sqrt{1-\rho^2}\sigma_2\epsilon_2$$
which means (a) $\gamma$ is not $1/\beta$ and (b) the connection between the two regressions depends on the joint distribution of $(X_1,X_2)$. | Is the average of betas from Y ~ X and X ~ Y valid? | To see the connection between both representations, take a bivariate Normal vector:
$$
\begin{pmatrix}
X_1 \\
X_2
\end{pmatrix} \sim \mathcal{N} \left( \begin{pmatrix}
\mu_1 \\
\mu_2
\end{pmatrix | Is the average of betas from Y ~ X and X ~ Y valid?
To see the connection between both representations, take a bivariate Normal vector:
$$
\begin{pmatrix}
X_1 \\
X_2
\end{pmatrix} \sim \mathcal{N} \left( \begin{pmatrix}
\mu_1 \\
\mu_2
\end{pmatrix} , \begin{pmatrix}
\sigma^2_1 & \rho \sigma_1 \sigma_2 \\
\rho \sigma_1 \sigma_2 & \sigma^2_2
\end{pmatrix} \right)
$$
with conditionals
$$X_1 \mid X_2=x_2 \sim \mathcal{N} \left( \mu_1 + \rho \frac{\sigma_1}{\sigma_2}(x_2 - \mu_2),(1-\rho^2)\sigma^2_1 \right)$$
and
$$X_2 \mid X_1=x_1 \sim \mathcal{N} \left( \mu_2 + \rho \frac{\sigma_2}{\sigma_1}(x_1 - \mu_1),(1-\rho^2)\sigma^2_2 \right)$$
This means that
$$X_1=\underbrace{\left(\mu_1-\rho \frac{\sigma_1}{\sigma_2}\mu_2\right)}_\alpha+\underbrace{\rho \frac{\sigma_1}{\sigma_2}}_\beta X_2+\sqrt{1-\rho^2}\sigma_1\epsilon_1$$
and
$$X_2=\underbrace{\left(\mu_2-\rho \frac{\sigma_2}{\sigma_1}\mu_1\right)}_\kappa+\underbrace{\rho \frac{\sigma_2}{\sigma_1}}_\gamma X_1+\sqrt{1-\rho^2}\sigma_2\epsilon_2$$
which means (a) $\gamma$ is not $1/\beta$ and (b) the connection between the two regressions depends on the joint distribution of $(X_1,X_2)$. | Is the average of betas from Y ~ X and X ~ Y valid?
To see the connection between both representations, take a bivariate Normal vector:
$$
\begin{pmatrix}
X_1 \\
X_2
\end{pmatrix} \sim \mathcal{N} \left( \begin{pmatrix}
\mu_1 \\
\mu_2
\end{pmatrix |
38,476 | Is the average of betas from Y ~ X and X ~ Y valid? | Converted from a comment.....
The exact values of $\beta$ and $\gamma$
can be found in this answer of mine to Effect of switching responses and explanatory variables in simple linear regression, and, as you suspect,
$\beta$ is not the reciprocal of $\gamma$, and averaging $\beta$ and $\gamma$
(or averaging $\beta$ and $1/\gamma$) is not the right way to go. A pictorial view of what $\beta$ and $\gamma$
are minimizing is given in Elvis's answer to the same question, and in the answer, he introduces a "least rectangles" regression that might be what you are looking for. The comments following Elvis's answer should not be neglected; they relate this "least rectangles" regression to other, previously studied, techniques. In particular, note that Moderator chl points out that this method is of interest when it is not clear which is the predictor variable and which the response variable. | Is the average of betas from Y ~ X and X ~ Y valid? | Converted from a comment.....
The exact values of $\beta$ and $\gamma$
can be found in this answer of mine to Effect of switching responses and explanatory variables in simple linear regression, and, | Is the average of betas from Y ~ X and X ~ Y valid?
Converted from a comment.....
The exact values of $\beta$ and $\gamma$
can be found in this answer of mine to Effect of switching responses and explanatory variables in simple linear regression, and, as you suspect,
$\beta$ is not the reciprocal of $\gamma$, and averaging $\beta$ and $\gamma$
(or averaging $\beta$ and $1/\gamma$) is not the right way to go. A pictorial view of what $\beta$ and $\gamma$
are minimizing is given in Elvis's answer to the same question, and in the answer, he introduces a "least rectangles" regression that might be what you are looking for. The comments following Elvis's answer should not be neglected; they relate this "least rectangles" regression to other, previously studied, techniques. In particular, note that Moderator chl points out that this method is of interest when it is not clear which is the predictor variable and which the response variable. | Is the average of betas from Y ~ X and X ~ Y valid?
Converted from a comment.....
The exact values of $\beta$ and $\gamma$
can be found in this answer of mine to Effect of switching responses and explanatory variables in simple linear regression, and, |
38,477 | Is the average of betas from Y ~ X and X ~ Y valid? | $\beta$ and $\gamma$
As Xi'an noted in his answer, $\beta$ and $\gamma$ relate to the conditional means $X|Y$ and $Y|X$ (which in turn relate to a single joint distribution), and they are not symmetric in the sense that $\beta \neq 1/\gamma$. This would be no different if you 'knew' the true $\sigma$ and $\rho$ instead of using estimates. You have $$\beta = \rho_{XY} \frac{\sigma_Y}{\sigma_X}$$ and $$\gamma = \rho_{XY} \frac{\sigma_X}{\sigma_Y}$$
or you could say
$$\beta \gamma = \rho_{XY}^2 \leq 1$$
See also simple linear regression on wikipedia for computation of the $\beta$ and $\gamma$.
It is this correlation term that disturbs the symmetry. If $\beta$ and $\gamma$ were simply the ratios of the standard deviations, $\sigma_Y/\sigma_X$ and $\sigma_X/\sigma_Y$, then they would indeed be each other's inverse. The $\rho_{XY}$ term can be seen as modifying this, as a sort of regression to the mean.
With perfect correlation, $\rho_{XY} = 1$, you can fully predict $X$ from $Y$ or vice versa. The two regression lines coincide and $$\beta \gamma = 1$$
But with less than perfect correlation, $\rho_{XY} < 1$, you cannot make those perfect predictions and the conditional mean will be somewhat closer to the unconditional mean than a simple scaling by $\sigma_Y/\sigma_X$ or $\sigma_X/\sigma_Y$ would suggest. The slopes of the regression lines will be less steep. The slopes will not be each other's reciprocal and their product will be smaller than one: $$\beta \gamma < 1$$
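A quick simulation makes this concrete (a sketch, not part of the original answer; the sample size and the values sd_X = 2, sd_Y = 1 and rho = 0.6 are arbitrary):
library(MASS)
set.seed(123)
rho   <- 0.6
Sigma <- matrix(c(4, rho * 2 * 1, rho * 2 * 1, 1), 2, 2)  # sd_X = 2, sd_Y = 1
xy    <- mvrnorm(1e5, mu = c(0, 0), Sigma = Sigma)
x <- xy[, 1]; y <- xy[, 2]
beta  <- coef(lm(y ~ x))[2]   # approx rho * sd_Y / sd_X = 0.3
gamma <- coef(lm(x ~ y))[2]   # approx rho * sd_X / sd_Y = 1.2
beta * gamma                  # approx rho^2 = 0.36, not 1
c(beta, 1 / gamma)            # beta is not the reciprocal of gamma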
Is a regression line the right method?
You may wonder whether these conditional probabilities and regression lines are what you need to determine your ratio of $X$ and $Y$. It is unclear to me how you would wish to use a regression line in the computation of an optimal ratio.
Below is an alternative way to compute the ratio. This method does have symmetry (i.e., if you switch X and Y you get the same ratio).
Alternative
Say the yields of bonds $X$ and $Y$ are distributed according to a multivariate normal distribution$^\dagger$ with correlation $\rho_{XY}$ and standard deviations $\sigma_X$ and $\sigma_Y$; then the yield of a hedge that is a weighted sum of $X$ and $Y$ will be normally distributed:
$$H = \alpha X + (1-\alpha) Y \sim N(\mu_H,\sigma_H^2)$$
where $0 \leq \alpha \leq 1$ and with
$$\begin{array}{rcl}
\mu_H &=& \alpha \mu_X+(1-\alpha) \mu_Y \\
\sigma_H^2 &=& \alpha^2 \sigma_X^2 + (1-\alpha)^2 \sigma_Y^2 + 2 \alpha (1-\alpha) \rho_{XY} \sigma_X \sigma_Y \\
& =& \alpha^2(\sigma_X^2+\sigma_Y^2 -2 \rho_{XY} \sigma_X\sigma_Y) + \alpha (-2 \sigma_Y^2+2\rho_{XY}\sigma_X\sigma_Y) +\sigma_Y^2
\end{array} $$
The maximum of the mean $\mu_H$ will be at $$\alpha = 0 \text{ or } \alpha=1$$ or, when $\mu_X=\mu_Y$, the mean does not depend on $\alpha$ at all.
The minimum of the variance $\sigma_H^2$ will be at $$\alpha = 1 - \frac{\sigma_X^2 -\rho_{XY}\sigma_X\sigma_Y}{\sigma_X^2 +\sigma_Y^2 -2 \rho_{XY} \sigma_X\sigma_Y} = \frac{\sigma_Y^2-\rho_{XY}\sigma_X\sigma_Y}{\sigma_X^2+\sigma_Y^2 -2 \rho_{XY} \sigma_X\sigma_Y} $$
The optimum will be somewhere in between those two extremes and depends on how you wish to compare losses and gains
Note that now there is a symmetry between $\alpha$ and $1-\alpha$. It does not matter whether you use the hedge $H=\alpha_1 X+(1-\alpha_1)Y$ or the hedge $H=\alpha_2 Y + (1-\alpha_2) X$. You will get the same ratios in terms of $\alpha_1 = 1-\alpha_2$.
Minimal variance case and relation with principal components
In the minimal variance case (here you actually do not need to assume a multivariate Normal distribution) you get the following hedge ratio as optimum $$\frac{\alpha}{1-\alpha} = \frac{var(Y) - cov(X,Y)}{var(X)-cov(X,Y)}$$ which can be expressed in terms of the regression coefficients $\beta = cov(X,Y)/var(X)$ and $\gamma = cov(X,Y)/var(Y)$ as follows: $$\frac{\alpha}{1-\alpha} = \frac{\beta\,(1-\gamma)}{\gamma\,(1-\beta)}$$
In a situation with more than two variables/stocks/bonds you might generalize this to the last (smallest eigenvalue) principal component.
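A small numerical check of this minimum-variance ratio (a sketch with arbitrary example moments, not part of the original answer):
var_x <- 4; var_y <- 1; cov_xy <- 0.8            # example values
beta  <- cov_xy / var_x
gamma <- cov_xy / var_y
# Weight on X that minimizes the variance of H = a * X + (1 - a) * Y
a <- (var_y - cov_xy) / (var_x + var_y - 2 * cov_xy)
a / (1 - a)                                      # hedge ratio, 0.0625 here
(var_y - cov_xy) / (var_x - cov_xy)              # same value
beta * (1 - gamma) / (gamma * (1 - beta))        # same value, via the regression slopes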
Variants
Improvements to the model can be made by using distributions other than the multivariate normal. You could also incorporate time in a more sophisticated model to make better predictions of future values/distributions for the pair $X,Y$.
$\dagger$ This is a simplification but it suits the purpose of explaining how one can, and should, perform the analysis to find an optimal ratio without a regression line. | Is the average of betas from Y ~ X and X ~ Y valid? | $\beta$ and $\gamma$
As Xi'an noted in his answer the $\beta$ and $\gamma$ are related to each other by relating to the conditional means $X|Y$ and $Y|X$ (which in their turn relate to a single joint | Is the average of betas from Y ~ X and X ~ Y valid?
$\beta$ and $\gamma$
As Xi'an noted in his answer the $\beta$ and $\gamma$ are related to each other by relating to the conditional means $X|Y$ and $Y|X$ (which in their turn relate to a single joint distribution) these are not symmetric in the sense that $\beta \neq 1/\gamma$. This is neither the case if you would 'know' the true $\sigma$ and $\rho$ instead of using estimates. You have $$\beta = \rho_{XY} \frac{\sigma_Y}{\sigma_X}$$ and $$\gamma = \rho_{XY} \frac{\sigma_X}{\sigma_Y}$$
or you could say
$$\beta \gamma = \rho_{XY}^2 \leq 1$$
See also simple linear regression on wikipedia for computation of the $\beta$ and $\gamma$.
It is this correlation term that disturbs the symmetry. If $\beta$ and $\gamma$ were simply the ratios of the standard deviations, $\sigma_Y/\sigma_X$ and $\sigma_X/\sigma_Y$, then they would indeed be each other's inverse. The $\rho_{XY}$ term can be seen as modifying this, as a sort of regression to the mean.
With perfect correlation, $\rho_{XY} = 1$, you can fully predict $X$ from $Y$ or vice versa. The two regression lines coincide and $$\beta \gamma = 1$$
But with less than perfect correlation, $\rho_{XY} < 1$, you cannot make those perfect predictions and the conditional mean will be somewhat closer to the unconditional mean than a simple scaling by $\sigma_Y/\sigma_X$ or $\sigma_X/\sigma_Y$ would suggest. The slopes of the regression lines will be less steep. The slopes will not be each other's reciprocal and their product will be smaller than one: $$\beta \gamma < 1$$
Is a regression line the right method?
You may wonder whether these conditional probabilities and regression lines is what you need to determine your ratios of $X$ and $Y$. It is unclear to me how you would wish to use a regression line in the computation of an optimal ratio.
Below is an alternative way to compute the ratio. This method does have symmetry (ie if you switch X and Y then you will get the same ratio).
Alternative
Say the yields of bonds $X$ and $Y$ are distributed according to a multivariate normal distribution$^\dagger$ with correlation $\rho_{XY}$ and standard deviations $\sigma_X$ and $\sigma_Y$; then the yield of a hedge that is a weighted sum of $X$ and $Y$ will be normally distributed:
$$H = \alpha X + (1-\alpha) Y \sim N(\mu_H,\sigma_H^2)$$
where $0 \leq \alpha \leq 1$ and with
$$\begin{array}{rcl}
\mu_H &=& \alpha \mu_X+(1-\alpha) \mu_Y \\
\sigma_H^2 &=& \alpha^2 \sigma_X^2 + (1-\alpha)^2 \sigma_Y^2 + 2 \alpha (1-\alpha) \rho_{XY} \sigma_X \sigma_Y \\
& =& \alpha^2(\sigma_X^2+\sigma_Y^2 -2 \rho_{XY} \sigma_X\sigma_Y) + \alpha (-2 \sigma_Y^2+2\rho_{XY}\sigma_X\sigma_Y) +\sigma_Y^2
\end{array} $$
The maximum of the mean $\mu_H$ will be at $$\alpha = 0 \text{ or } \alpha=1$$ or not existing when $\mu_X=\mu_Y$.
The minimum of the variance $\sigma_H^2$ will be at $$\alpha = 1 - \frac{\sigma_X^2 -\rho_{XY}\sigma_X\sigma_Y}{\sigma_X^2 +\sigma_Y^2 -2 \rho_{XY} \sigma_X\sigma_Y} = \frac{\sigma_Y^2-\rho_{XY}\sigma_X\sigma_Y}{\sigma_X^2+\sigma_Y^2 -2 \rho_{XY} \sigma_X\sigma_Y} $$
The optimum will be somewhere in between those two extremes and depends on how you wish to compare losses and gains
Note that now there is a symmetry between $\alpha$ and $1-\alpha$. It does not matter whether you use the hedge $H=\alpha_1 X+(1-\alpha_1)Y$ or the hedge $H=\alpha_2 Y + (1-\alpha_2) X$. You will get the same ratios in terms of $\alpha_1 = 1-\alpha_2$.
Minimal variance case and relation with principal components
In the minimal variance case (here you actually do not need to assume a multivariate Normal distribution) you get the following hedge ratio as optimum $$\frac{\alpha}{1-\alpha} = \frac{var(Y) - cov(X,Y)}{var(X)-cov(X,Y)}$$ which can be expressed in terms of the regression coefficients $\beta = cov(X,Y)/var(X)$ and $\gamma = cov(X,Y)/var(Y)$ as follows: $$\frac{\alpha}{1-\alpha} = \frac{\beta\,(1-\gamma)}{\gamma\,(1-\beta)}$$
In a situation with more than two variables/stocks/bonds you might generalize this to the last (smallest eigenvalue) principal component.
Variants
Improvements of the model can be made by using different distributions than multivariate normal. Also you could incorporate the time in a more sophisticated model to make better predictions of future values/distributions for the pair $X,Y$.
$\dagger$ This is a simplification but it suits the purpose of explaining how one can, and should, perform the analysis to find an optimal ratio without a regression line. | Is the average of betas from Y ~ X and X ~ Y valid?
$\beta$ and $\gamma$
As Xi'an noted in his answer the $\beta$ and $\gamma$ are related to each other by relating to the conditional means $X|Y$ and $Y|X$ (which in their turn relate to a single joint |
38,478 | Is the average of betas from Y ~ X and X ~ Y valid? | Perhaps the approach of "Granger causality" might help. This would help you to assess whether X is a good predictor of Y or whether Y is a better predictor of X. In other words, it tells you whether beta or gamma is the thing to take more seriously. Also, considering that you are dealing with time series data, it tells you how much of the history of X counts towards the prediction of Y (or vice versa).
Wikipedia gives a simple explanation:
A time series X is said to Granger-cause Y if it can be shown, usually through a series of t-tests and F-tests on lagged values of X (and with lagged values of Y also included), that those X values provide statistically significant information about future values of Y.
What you do is the following:
regress Y(t) on X(t-1) and Y(t-1)
regress Y(t) on X(t-1), X(t-2), Y(t-1) and Y(t-2)
regress Y(t) on X(t-1), X(t-2), X(t-3), Y(t-1), Y(t-2) and Y(t-3)
Continue for whatever history length might be reasonable. Check the significance of the F-statistics for each regression.
Then do the same in reverse (now regress X(t) on the past values of X and Y) and see which regressions have significant F-values.
A very straightforward example, with R code, is found here.
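As an illustration, a minimal sketch with the lmtest package (this is not the linked example; x and y are assumed to be numeric time-series vectors and the lag order 3 is arbitrary):
library(lmtest)
# Do lagged values of x help predict y, beyond y's own lags?
grangertest(y ~ x, order = 3)
# And the reverse direction: do lagged values of y help predict x?
grangertest(x ~ y, order = 3)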
Granger causality has been critiqued for not actually establishing causality (in some cases). But it seems that your application is really about "predictive causality," which is exactly what the Granger causality approach is meant for.
The point is that the approach will tell you whether X predicts Y or whether Y predicts X (so you no longer would be tempted to artificially--and incorrectly--compound the two regression coefficients) and it gives you a better prediction (as you will know how much history of X and Y you need to know to predict Y), which is useful for hedging purposes, right? | Is the average of betas from Y ~ X and X ~ Y valid? | Perhaps the approach of "Granger causality" might help. This would help you to assess whether X is a good predictor of Y or whether X is a better of Y. In other words, it tells you whether beta or gam | Is the average of betas from Y ~ X and X ~ Y valid?
Perhaps the approach of "Granger causality" might help. This would help you to assess whether X is a good predictor of Y or whether Y is a better predictor of X. In other words, it tells you whether beta or gamma is the thing to take more seriously. Also, considering that you are dealing with time series data, it tells you how much of the history of X counts towards the prediction of Y (or vice versa).
Wikipedia gives a simple explanation:
A time series X is said to Granger-cause Y if it can be shown, usually through a series of t-tests and F-tests on lagged values of X (and with lagged values of Y also included), that those X values provide statistically significant information about future values of Y.
What you do is the following:
regress Y(t) on X(t-1) and Y(t-1)
regress Y(t) on X(t-1), X(t-2), Y(t-1) and Y(t-2)
regress Y(t) on X(t-1), X(t-2), X(t-3), Y(t-1), Y(t-2) and Y(t-3)
Continue for whatever history length might be reasonable. Check the significance of the F-statistics for each regression.
Then do the same in reverse (now regress X(t) on the past values of X and Y) and see which regressions have significant F-values.
A very straightforward example, with R code, is found here.
Granger causality has been critiqued for not actually establishing causality (in some cases). But it seems that your application is really about "predictive causality," which is exactly what the Granger causality approach is meant for.
The point is that the approach will tell you whether X predicts Y or whether Y predicts X (so you no longer would be tempted to artificially--and incorrectly--compound the two regression coefficients) and it gives you a better prediction (as you will know how much history of X and Y you need to know to predict Y), which is useful for hedging purposes, right? | Is the average of betas from Y ~ X and X ~ Y valid?
Perhaps the approach of "Granger causality" might help. This would help you to assess whether X is a good predictor of Y or whether X is a better of Y. In other words, it tells you whether beta or gam |
38,479 | assumptions for lmer models | Existence of variance: Do not need to check, in practice, it is always true.
Linearity: Do not need to check, because your covariates are categorical.
Homogeneity: Need to Check by plotting residuals vs predicted values.
Normality of error term: need to check by histogram, QQplot of residuals, even Kolmogorov-Smirnov test.
Normality of random effects: Get the estimates of the random effects (in your case, the random intercepts) and check them as you check the residuals. But this is not very informative because you have only 7 random intercepts.
Another assumption is independence between subjects. There is no test for this; it is based on your judgement. A subject-specific random intercept means the correlations between responses from the same subject are all equal. | assumptions for lmer models | Existence of variance: Do not need to check, in practice, it is always true.
Linearity: Do not need to check, because your covariates are categorical.
Homogeneity: Need to Check by plotting residual | assumptions for lmer models
Existence of variance: Do not need to check, in practice, it is always true.
Linearity: Do not need to check, because your covariates are categorical.
Homogeneity: Need to Check by plotting residuals vs predicted values.
Normality of error term: need to check by histogram, QQplot of residuals, even Kolmogorov-Smirnov test.
Normality of random effects: Get the estimates of the random effects (in your case, the random intercepts) and check them as you check the residuals. But this is not very informative because you have only 7 random intercepts.
Another assumption is independence between subjects. There is no test for this; it is based on your judgement. A subject-specific random intercept means the correlations between responses from the same subject are all equal. | assumptions for lmer models
Existence of variance: Do not need to check, in practice, it is always true.
Linearity: Do not need to check, because your covariates are categorical.
Homogeneity: Need to Check by plotting residual |
38,480 | assumptions for lmer models | The commonly quoted assumptions (or "conditions" as I prefer to call some of them) of linear mixed effects models are:
Linearity of the predictors. This can be checked by plotting the residuals against the response and looking for any systematic shape, and by including non-linear terms (or splines) and comparing the model fit. Very often this will not be an issue, and if it is, then including non-linear terms (such as log, exp or polynomials) in the linear predictor may be sufficient. More importantly, substantive domain knowledge should inform whether the linearity condition is justified. For example, in some domains such as pharmacokinetics, we already know, based on rigorous theory and experimentation, that a linear model is not appropriate in some cases.
The residuals have constant variance. This can be checked with a plot of residuals against fitted values - there should be no pattern/trend.
The residuals are independent. This can be checked by plotting residuals against covariates - especially time-varying or spatial covariates. There should not be any systematic pattern
The residuals are normally distributed. This can be checked in many ways, such as a Q-Q plot and a simple histogram. Statistical tests, such as Anderson-Darling and Kolmogorov–Smirnov are also possible.
Note that "residuals" above refer to both the unit-level residuals (often called "errors") and the random effects. For the random effects, this can be problematic where only a small number of groups/clusters exist in the sample. In the simulated example given in the OP there are 7 clusters. There are lots of rules of thumb to inform a sufficient number of clusters, and 7 is generally thought to be difficult to draw any conclusions.
It is mentioned in the OP that their actual model exhibits a funnel-shaped plot of residuals vs fitted values. This indicates heteroskedasticity. One way to proceed is to consider a transformation of variables - ideally this should be informed by expert domain knowledge. With this in mind, Box-Cox transformations may be useful. | assumptions for lmer models | The commonly quoted assumptions (or "conditions" as I prefer to call some of them) of linear mixed effects models are:
Linearity of the predictors. This can be checked by plotting the residuals again | assumptions for lmer models
The commonly quoted assumptions (or "conditions" as I prefer to call some of them) of linear mixed effects models are:
Linearity of the predictors. This can be checked by plotting the residuals against the response and looking for any systematic shape, and by including non-linear terms (or splines) and comparing the model fit. Very often this will not be an issue, and if it is, then including non-linear terms (such as log, exp or polynomials) in the linear predictor may be sufficient. More importantly, substantive domain knowledge should inform whether the linearity condition is justified. For example, in some domains such as pharmacokinetics, we already know, based on rigorous theory and experimentation, that a linear model is not appropriate in some cases.
The residuals have constant variance. This can be checked with a plot of residuals against fitted values - there should be no pattern/trend.
The residuals are independent. This can be checked by plotting residuals against covariates - especially time-varying or spatial covariates. There should not be any systematic pattern
The residuals are normally distributed. This can be checked in many ways, such as a Q-Q plot and a simple histogram. Statistical tests, such as Anderson-Darling and Kolmogorov–Smirnov are also possible.
Note that "residuals" above refer to both the unit-level residuals (often called "errors") and the random effects. For the random effects, this can be problematic where only a small number of groups/clusters exist in the sample. In the simulated example given in the OP there are 7 clusters. There are lots of rules of thumb to inform a sufficient number of clusters, and 7 is generally thought to be difficult to draw any conclusions.
It is mentioned in the OP that their actual model exhibits a funnel-shaped plot of residuals vs fitted values. This indicates heteroskedasticity. One way to proceed is to consider a transformation of variables - ideally this should be informed by expert domain knowledge. With this in mind, Box-Cox transformations may be useful. | assumptions for lmer models
The commonly quoted assumptions (or "conditions" as I prefer to call some of them) of linear mixed effects models are:
Linearity of the predictors. This can be checked by plotting the residuals again |
38,481 | Textbooks on GLMs outside of Bernoulli, Binomial, and Poisson? | I don't know why the book Generalized Linear Models by McCullagh and Nelder shouldn't be a top contender. It's considered the founding work on GLMs. It is a highly technical book, focused on interpretation, asymptotic theory, and general framework. A GLM is nothing more than a link function and a mean-variance relationship. Speaking as a mathematician, all the "second-generation" GLMs you mention are just special cases of the framework; and so with a good understanding and some confidence, you could derive, implement, fit, interpret, and test any of those models.
In the book, you can find many applied data analysis examples of interesting problems and inference such as cumulative link models (like proportional odds), the Cox model (which is a GLM interestingly), the cloglog link for discrete survival, and so on.
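To make the "link function plus mean-variance relationship" point concrete, here is a small R sketch with simulated data (illustrative only, not taken from the book):
set.seed(1)
x  <- runif(200, 1, 10)
mu <- exp(0.2 + 0.3 * x)
y  <- rgamma(200, shape = 2, rate = 2 / mu)   # positive response with mean mu
# Gamma mean-variance relationship with a log link
fit_log <- glm(y ~ x, family = Gamma(link = "log"))
# Same family, different link
fit_inv <- glm(y ~ x, family = Gamma(link = "inverse"))
summary(fit_log)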
This book is not a comprehensive dictionary of named GLMs (that would be a waste of time) nor is it a detailed step-by-step implementation guide for fitting GLMs in R (it assumes the reader has the know-how). However, it dovetails excellently with R's glm. The help file even demonstrates fitting models with custom link functions. | Textbooks on GLMs outside of Bernoulli, Binomial, and Poisson? | I don't know why the book Generalized Linear Models by McCullagh and Nelder shouldn't be a top contender. It's considered the founding work on GLMs. It is a highly technical book, focused on interpret | Textbooks on GLMs outside of Bernoulli, Binomial, and Poisson?
I don't know why the book Generalized Linear Models by McCullagh and Nelder shouldn't be a top contender. It's considered the founding work on GLMs. It is a highly technical book, focused on interpretation, asymptotic theory, and general framework. A GLM is nothing more than a link function and a mean-variance relationship. Speaking as a mathematician, all the "second-generation" GLMs you mention are just special cases of the framework; and so with a good understanding and some confidence, you could derive, implement, fit, interpret, and test any of those models.
In the book, you can find many applied data analysis examples of interesting problems and inference such as cumulative link models (like proportional odds), the Cox model (which is a GLM interestingly), the cloglog link for discrete survival, and so on.
This book is not a comprehensive dictionary of named GLMs (that would be a waste of time) nor is it a detailed step-by-step implementation guide for fitting GLMs in R (it assumes the reader has the know-how). However, it dovetails excellently with R's glm. The help file even demonstrates fitting models with custom link functions. | Textbooks on GLMs outside of Bernoulli, Binomial, and Poisson?
I don't know why the book Generalized Linear Models by McCullagh and Nelder shouldn't be a top contender. It's considered the founding work on GLMs. It is a highly technical book, focused on interpret |
38,482 | Textbooks on GLMs outside of Bernoulli, Binomial, and Poisson? | The book Extending the Linear Model with R by Faraway has a chapter on "other GLM", and the count regression chapter also has a Negative Binomial discussion.
Generalized Linear Modeling with H2O has something on Gamma GLMs and Tweedie GLMs. Note that Tweedie GLMs are often used by insurance companies, so you may be able to find more literature using keywords from that area. | Textbooks on GLMs outside of Bernoulli, Binomial, and Poisson? | The book Extending the Linear Model with R by Faraway has a chapter on "other GLM", and the count regression chapter also has a Negative Binomial discussion.
Generalized Linear Modeling with H20 has s | Textbooks on GLMs outside of Bernoulli, Binomial, and Poisson?
The book Extending the Linear Model with R by Faraway has a chapter on "other GLM", and the count regression chapter also has a Negative Binomial discussion.
Generalized Linear Modeling with H20 has something on Gamma GLMs and Tweedie GLMs. Note that Tweedie GLMs are used often by insurance companies, so you may be able to find more literature with key words from there. | Textbooks on GLMs outside of Bernoulli, Binomial, and Poisson?
The book Extending the Linear Model with R by Faraway has a chapter on "other GLM", and the count regression chapter also has a Negative Binomial discussion.
Generalized Linear Modeling with H20 has s |
38,483 | Textbooks on GLMs outside of Bernoulli, Binomial, and Poisson? | Hardin and Hilbe cover a bit more than the typical basic book (Dobson and Barnett, etc.); the table of contents shows that they have chapters covering Gamma, inverse Gaussian, etc.. As I recall they also have some other useful extensions for count data (like the NB1, i.e. a negative binomial with variance proportional to the mean rather than a quadratic function of the mean). | Textbooks on GLMs outside of Bernoulli, Binomial, and Poisson? | Hardin and Hilbe cover a bit more than the typical basic book (Dobson and Barnett, etc.); the table of contents shows that they have chapters covering Gamma, inverse Gaussian, etc.. As I recall they a | Textbooks on GLMs outside of Bernoulli, Binomial, and Poisson?
Hardin and Hilbe cover a bit more than the typical basic book (Dobson and Barnett, etc.); the table of contents shows that they have chapters covering Gamma, inverse Gaussian, etc.. As I recall they also have some other useful extensions for count data (like the NB1, i.e. a negative binomial with variance proportional to the mean rather than a quadratic function of the mean). | Textbooks on GLMs outside of Bernoulli, Binomial, and Poisson?
Hardin and Hilbe cover a bit more than the typical basic book (Dobson and Barnett, etc.); the table of contents shows that they have chapters covering Gamma, inverse Gaussian, etc.. As I recall they a |
38,484 | How to generate correlated Bernoulli variables? | I can see the problem in your experiment.
You did everything right until:
binvars <- qbinom(pvars, 4, .3)
Why? Let's try to understand what pvars represents first.
pvars is a matrix that contains Uniforms between $0$ and $1$ that have the correlation structure you specified earlier in the variable sigma.
If you feed those Uniforms to any desired inverse CDF, say the Bernoulli, you get 4 vectors of correlated Bernoulli variables.
This is an application of a GAUSSIAN COPULA.
The problem with your code is that you feed the pvars to the inverse CDF of a Binomial distribution with 4 trials, $Bin(n,p) = Bin(4, 0.3)$ in your case.
You simulated 4 correlated Binomial(4, 0.3) variables, not 4 correlated Bernoulli(0.3) variables.
Think about it. Given the correlation matrix you chose, how is it possible that the sum of your 4 Bernoulli variables gives 4? It is not possible; nevertheless you obtained some 4s in your summary.
Change the code like this:
binvars <- qbinom(pvars, 1, .3)
X = apply(binvars,1,sum)
X is the vector you are looking for.
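For a self-contained illustration of the copula step (a sketch; the equicorrelated sigma used here is arbitrary and merely stands in for the correlation matrix built in the question):
library(MASS)
set.seed(42)
sigma   <- matrix(0.5, 4, 4); diag(sigma) <- 1        # illustrative correlation matrix
z       <- mvrnorm(1e4, mu = rep(0, 4), Sigma = sigma)
pvars   <- pnorm(z)                                   # correlated Uniform(0, 1) variables
binvars <- matrix(qbinom(pvars, 1, 0.3), ncol = 4)    # four correlated Bernoulli(0.3)
X       <- rowSums(binvars)                           # sum of the four Bernoulli variables
cor(binvars); table(X)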
EDIT:
Following @Xi'an's answer it's much simpler:
Basically you simulate only two independent Bernoulli variables, say b1 and b3, with respective probabilities of success p1 and p3.
Finally you set the remaining two Bernoulli variables as b2 = 1 - b1 and b4 = 1 - b3, as @Xi'an suggested.
This way b2 and b4 end up being perfectly negatively correlated to b1 and b3.
set.seed(961)
p1 = 0.3
p3 = 0.7
b1 =rbinom(1e4, 1, p1)
b3 =rbinom(1e4, 1, p3)
b2 = 1 - b1
b4 = 1 - b3
binvars = cbind(b1,b2,b3,b4)
X = apply(binvars,1,sum)
cor(binvars)
b1 b2 b3 b4
b1 1.0000000000 -1.0000000000 0.0007217015 -0.0007217015
b2 -1.0000000000 1.0000000000 -0.0007217015 0.0007217015
b3 0.0007217015 -0.0007217015 1.0000000000 -1.0000000000
b4 -0.0007217015 0.0007217015 -1.0000000000 1.0000000000 | How to generate correlated Bernoulli variables? | I can see the problem in you experiment.
You did everything right until:
binvars <- qbinom(pvars, 4, .3)
Why? Let's try to understand what pvars represents first.
pvars is a matrix that contains Uni | How to generate correlated Bernoulli variables?
I can see the problem in your experiment.
You did everything right until:
binvars <- qbinom(pvars, 4, .3)
Why? Let's try to understand what pvars represents first.
pvars is a matrix that contains Uniforms between $0$ and $1$ that have the correlation structure you specified earlier in the variable sigma.
If you feed those Uniforms to any desired inverse CDF, say the Bernoulli, you get 4 vectors of correlated Bernoulli variables.
This is an application of a GAUSSIAN COPULA.
The problem with your code is that you feed the pvars to the inverse CDF of a Binomial distribution with 4 trials, $Bin(n,p) = Bin(4, 0.3)$ in your case.
You simulated 4 correlated Binomial(4, 0.3) variables, not 4 correlated Bernoulli(0.3) variables.
Think about it. Given the correlation matrix you chose, how is it possible that the sum of your 4 Bernoulli variables gives 4? It is not possible; nevertheless you obtained some 4s in your summary.
Change the code like this:
binvars <- qbinom(pvars, 1, .3)
X = apply(binvars,1,sum)
X is the vector you are looking for.
EDIT:
Following @Xi'an's answer it's much simpler:
Basically you simulate only two independent Bernoulli variables, say b1 and b3, with respective probabilities of success p1 and p3.
Finally you set the remaining two Bernoulli variables as b2 = 1 - b1 and b4 = 1 - b3, as @Xi'an suggested.
This way b2 and b4 end up being perfectly negatively correlated to b1 and b3.
set.seed(961)
p1 = 0.3
p3 = 0.7
b1 =rbinom(1e4, 1, p1)
b3 =rbinom(1e4, 1, p3)
b2 = 1 - b1
b4 = 1 - b3
binvars = cbind(b1,b2,b3,b4)
X = apply(binvars,1,sum)
cor(binvars)
b1 b2 b3 b4
b1 1.0000000000 -1.0000000000 0.0007217015 -0.0007217015
b2 -1.0000000000 1.0000000000 -0.0007217015 0.0007217015
b3 0.0007217015 -0.0007217015 1.0000000000 -1.0000000000
b4 -0.0007217015 0.0007217015 -1.0000000000 1.0000000000 | How to generate correlated Bernoulli variables?
I can see the problem in you experiment.
You did everything right until:
binvars <- qbinom(pvars, 4, .3)
Why? Let's try to understand what pvars represents first.
pvars is a matrix that contains Uni |
38,485 | How to generate correlated Bernoulli variables? | With the values of the correlation matrix that are proposed in the question, no simulation is needed to solve the question, as the answer is deterministic and available. Here is the reason why:
To generate two Bernoulli variates that are perfectly correlated, i.e., when$$\mathrm{corr}(C_1,C_2)=-1$$ one needs to find the conditional distribution that fits this constraint:
\begin{align*}
\mathrm{cov}(C_1,C_2) &=\mathbb{E}[C_1C_2]-\mathbb{E}[C_1]\mathbb{E}[C_2]\\
&=\mathbb{P}(C_1=C_2=1)-\mathbb{P}(C_1=1)\mathbb{P}(C_2=1)\\
&=\mathbb{P}(C_1=1|C_2=1)\mathbb{P}(C_2=1)-\mathbb{P}(C_1=1)\mathbb{P}(C_2=1)\\ &=q_{11}p_2-p_1p_2\\ &=p_2(q_{11}-p_1)
\end{align*}
which should satisfy$$p_2(q_{11}-p_1)=-\sqrt{p_1p_2(1-p_1)(1-p_2)}$$or$$q_{11}=p_1-\sqrt{p_1(1-p_1)(1-p_2)/p_2}=p_1\left\{1-
\sqrt{(1-p_1)(1-p_2)/p_1p_2}\right\}$$which allows for a solution in $(0,1)$ if and only if$$(1-p_1)(1-p_2)\le p_1p_2$$To generate $C_1$ when $C_2=0$, one needs the conditional probability $q_{10}=\mathbb{P}(C_1=1|C_2=0)$ which is given by$$q_{10}(1-p_2)+q_{11}p_2=p_1$$or
$$q_{10}=(p_1-p_2q_{11})/(1-p_2)=p_1+\sqrt{p_1p_2(1-p_1)/(1-p_2)}$$which allows for a solution in $(0,1)$ only if
$$\sqrt{\frac{p_2(1-p_1)}{(1-p_2)p_1}}\le\frac{1}{p_1}-1=\frac{1-p_1}{p_1}$$
i.e.,$$\sqrt{\frac{p_2}{(1-p_2)}}\le\sqrt{\frac{1-p_1}{p_1}}$$which amounts to$$p_1p_2\le (1-p_1)(1-p_2)$$Therefore, the only case when a negative correlation of $-1$ is feasible is when$$p_1p_2 = (1-p_1)(1-p_2)$$or$$\frac{p_1}{1-p_1}=\frac{1-p_2}{p_2}$$i.e.,$$p_2=1-p_1$$ This leads to
$$q_{11}=p_1-\sqrt{p_1(1-p_1)(1-p_2)/p_2}=p_1-\sqrt{p_1(1-p_1)p_1/(1-p_1)}=0$$and$$q_{10}=p_1+\sqrt{p_1p_2(1-p_1)/(1-p_2)}=p_1+\sqrt{p_1(1-p_1)(1-p_1)/p_1}=1$$meaning that $C_1$ is equal to zero when $C_2$ is equal to one and vice-versa. Therefore$$C_1+C_2=1$$with probability $1$.
QED: no simulation is needed!
Note that this property would extend to the Binomial case in that
$C_2=n-C_1$ is the only Binomial $\mathcal{B}(n,1-p)$ perfectly and
negatively correlated with $C_1\sim\mathcal{B}(n,p)$ | How to generate correlated Bernoulli variables? | With the values of the correlation matrix that are proposed in the question, no simulation is needed to solve the question, as the answer is deterministic and available. Here is the reason why:
To gen | How to generate correlated Bernoulli variables?
With the values of the correlation matrix that are proposed in the question, no simulation is needed to solve the question, as the answer is deterministic and available. Here is the reason why:
To generate two Bernoulli variates that are perfectly correlated, i.e., when$$\mathrm{corr}(C_1,C_2)=-1$$ one needs to find the conditional distribution that fits this constraint:
\begin{align*}
\mathrm{cov}(C_1,C_2) &=\mathbb{E}[C_1C_2]-\mathbb{E}[C_1]\mathbb{E}[C_2]\\
&=\mathbb{P}(C_1=C_2=1)-\mathbb{P}(C_1=1)\mathbb{P}(C_2=1)\\
&=\mathbb{P}(C_1=1|C_2=1)\mathbb{P}(C_2=1)-\mathbb{P}(C_1=1)\mathbb{P}(C_2=1)\\ &=q_{11}p_2-p_1p_2\\ &=p_2(q_{11}-p_1)
\end{align*}
which should satisfy$$p_2(q_{11}-p_1)=-\sqrt{p_1p_2(1-p_1)(1-p_2)}$$or$$q_{11}=p_1-\sqrt{p_1(1-p_1)(1-p_2)/p_2}=p_1\left\{1-
\sqrt{(1-p_1)(1-p_2)/p_1p_2}\right\}$$which allows for a solution in $(0,1)$ if and only if$$(1-p_1)(1-p_2)\le p_1p_2$$To generate $C_1$ when $C_2=0$, one needs the conditional probability $q_{10}=\mathbb{P}(C_1=1|C_2=0)$ which is given by$$q_{10}(1-p_2)+q_{11}p_2=p_1$$or
$$q_{10}=(p_1-p_2q_{11})/(1-p_2)=p_1+\sqrt{p_1p_2(1-p_1)/(1-p_2)}$$which allows for a solution in $(0,1)$ only if
$$\sqrt{\frac{p_2(1-p_1)}{(1-p_2)p_1}}\le\frac{1}{p_1}-1=\frac{1-p_1}{p_1}$$
i.e.,$$\sqrt{\frac{p_2}{(1-p_2)}}\le\sqrt{\frac{1-p_1}{p_1}}$$which amounts to$$p_1p_2\le (1-p_1)(1-p_2)$$Therefore, the only case when a negative correlation of $-1$ is feasible is when$$p_1p_2 = (1-p_1)(1-p_2)$$or$$\frac{p_1}{1-p_1}=\frac{1-p_2}{p_2}$$i.e.,$$p_2=1-p_1$$ This leads to
$$q_{11}=p_1-\sqrt{p_1(1-p_1)(1-p_2)/p_2}=p_1-\sqrt{p_1(1-p_1)p_1/(1-p_1)}=0$$and$$q_{10}=p_1+\sqrt{p_1p_2(1-p_1)/(1-p_2)}=p_1+\sqrt{p_1(1-p_1)(1-p_1)/p_1}=1$$meaning that $C_1$ is equal to zero when $C_2$ is equal to one and vice-versa. Therefore$$C_1+C_2=1$$with probability $1$.
QED: no simulation is needed!
Note that this property would extend to the Binomial case in that
$C_2=n-C_1$ is the only Binomial $\mathcal{B}(n,1-p)$ perfectly and
negatively correlated with $C_1\sim\mathcal{B}(n,p)$ | How to generate correlated Bernoulli variables?
With the values of the correlation matrix that are proposed in the question, no simulation is needed to solve the question, as the answer is deterministic and available. Here is the reason why:
To gen |
38,486 | How to generate correlated Bernoulli variables? | Using a mathematical method described by whuber in this related question, I have programmed a function that generates pairs of correlated binomial random variables using the standard syntax for distributions in R. You can call this function to generate any desired number of correlated Bernoulli random variables, with specified probabilities prob1 and prob2 and specified correlation corr. Note that the correlation coefficient is the correlation of the individual Bernoulli values that sum to the binomial, not the correlation between the binomial values themselves.
rcorrbinom <- function(n, size = 1, prob1, prob2, corr = 0) {
#Check inputs
if (!is.numeric(n)) { stop('Error: n must be numeric') }
if (length(n) != 1) { stop('Error: n must be a single number') }
if (as.integer(n) != n) { stop('Error: n must be a positive integer') }
if (n < 1) { stop('Error: n must be a positive integer') }
if (!is.numeric(size)) { stop('Error: size must be numeric') }
if (length(size) != 1) { stop('Error: size must be a single number') }
if (as.integer(size) != size) { stop('Error: size must be a positive integer') }
if (size < 1) { stop('Error: size must be a positive integer') }
if (!is.numeric(prob1)) { stop('Error: prob1 must be numeric') }
if (length(prob1) != 1) { stop('Error: prob1 must be a single number') }
if (prob1 < 0) { stop('Error: prob1 must be between 0 and 1') }
if (prob1 > 1) { stop('Error: prob1 must be between 0 and 1') }
if (!is.numeric(prob2)) { stop('Error: prob2 must be numeric') }
if (length(prob2) != 1) { stop('Error: prob2 must be a single number') }
if (prob2 < 0) { stop('Error: prob2 must be between 0 and 1') }
if (prob2 > 1) { stop('Error: prob2 must be between 0 and 1') }
if (!is.numeric(corr)) { stop('Error: corr must be numeric') }
if (length(corr) != 1) { stop('Error: corr must be a single number') }
if (corr < -1) { stop('Error: corr must be between -1 and 1') }
if (corr > 1) { stop('Error: corr must be between -1 and 1') }
#Compute probabilities
P00 <- (1-prob1)*(1-prob2) + corr*sqrt(prob1*prob2*(1-prob1)*(1-prob2))
P01 <- 1 - prob1 - P00
P10 <- 1 - prob2 - P00
P11 <- P00 + prob1 + prob2 - 1
PROBS <- c(P00, P01, P10, P11)
if (min(PROBS) < 0) { stop('Error: corr is not in the allowable range') }
#Generate the output
RAND <- array(sample.int(4, size = n*size, replace = TRUE, prob = PROBS),
dim = c(n, size))
VALS <- array(0, dim = c(2, n, size))
OUT <- array(0, dim = c(2, n))
for (i in 1:n) {
for (j in 1:size) {
VALS[1,i,j] <- (RAND[i,j] %in% c(3, 4))
VALS[2,i,j] <- (RAND[i,j] %in% c(2, 4)) }
OUT[1, i] <- sum(VALS[1,i,])
OUT[2, i] <- sum(VALS[2,i,]) }
#Give output
OUT }
Here is an example of using this function to produce a sample array containing a large number of correlated Bernoulli random variables. We can confirm that, for a large sample, the sampled values have sample means and sample correlation that is close to the specified parameters.
#Set parameters
n <- 10^6
PROB1 <- 0.3
PROB2 <- 0.8
CORR <- 0.2
#Generate sample of correlated Bernoulli random variables
set.seed(1)
SAMPLE <- rcorrbinom(n = n, prob1 = PROB1, prob2 = PROB2, corr = CORR)
#Check the properties of the sample
str(SAMPLE)
num [1:2, 1:10000] 0 1 0 1 1 1 0 0 0 1 ...
mean(SAMPLE[1,])
[1] 0.300122
mean(SAMPLE[2,])
[1] 0.800145
cor(SAMPLE[1,], SAMPLE[2,])
[1] 0.20018 | How to generate correlated Bernoulli variables? | Using a mathematical method described by whuber in this related question, I have programmed a function that generates pairs of correlated binomial random variables using the standard syntax for distri | How to generate correlated Bernoulli variables?
Using a mathematical method described by whuber in this related question, I have programmed a function that generates pairs of correlated binomial random variables using the standard syntax for distributions in R. You can call this function to generate any desired number of correlated Bernoulli random variables, with specified probabilities prob1 and prob2 and specified correlation corr. Note that the correlation coefficient is the correlation of the individual Bernoulli values that sum to the binomial, not the correlation between the binomial values themselves.
rcorrbinom <- function(n, size = 1, prob1, prob2, corr = 0) {
#Check inputs
if (!is.numeric(n)) { stop('Error: n must be numeric') }
if (length(n) != 1) { stop('Error: n must be a single number') }
if (as.integer(n) != n) { stop('Error: n must be a positive integer') }
if (n < 1) { stop('Error: n must be a positive integer') }
if (!is.numeric(size)) { stop('Error: n must be numeric') }
if (length(size) != 1) { stop('Error: n must be a single number') }
if (as.integer(size) != size) { stop('Error: n must be a positive integer') }
if (size < 1) { stop('Error: n must be a positive integer') }
if (!is.numeric(prob1)) { stop('Error: prob1 must be numeric') }
if (length(prob1) != 1) { stop('Error: prob1 must be a single number') }
if (prob1 < 0) { stop('Error: prob1 must be between 0 and 1') }
if (prob1 > 1) { stop('Error: prob1 must be between 0 and 1') }
if (!is.numeric(prob2)) { stop('Error: prob2 must be numeric') }
if (length(prob2) != 1) { stop('Error: prob2 must be a single number') }
if (prob2 < 0) { stop('Error: prob2 must be between 0 and 1') }
if (prob2 > 1) { stop('Error: prob2 must be between 0 and 1') }
if (!is.numeric(corr)) { stop('Error: corr must be numeric') }
if (length(corr) != 1) { stop('Error: corr must be a single number') }
if (corr < -1) { stop('Error: corr must be between -1 and 1') }
if (corr > 1) { stop('Error: corr must be between -1 and 1') }
#Compute probabilities
P00 <- (1-prob1)*(1-prob2) + corr*sqrt(prob1*prob2*(1-prob1)*(1-prob2))
P01 <- 1 - prob1 - P00
P10 <- 1 - prob2 - P00
P11 <- P00 + prob1 + prob2 - 1
PROBS <- c(P00, P01, P10, P11)
if (min(PROBS) < 0) { stop('Error: corr is not in the allowable range') }
#Generate the output
RAND <- array(sample.int(4, size = n*size, replace = TRUE, prob = PROBS),
dim = c(n, size))
VALS <- array(0, dim = c(2, n, size))
OUT <- array(0, dim = c(2, n))
for (i in 1:n) {
for (j in 1:size) {
VALS[1,i,j] <- (RAND[i,j] %in% c(3, 4))
VALS[2,i,j] <- (RAND[i,j] %in% c(2, 4)) }
OUT[1, i] <- sum(VALS[1,i,])
OUT[2, i] <- sum(VALS[2,i,]) }
#Give output
OUT }
Here is an example of using this function to produce a sample array containing a large number of correlated Bernoulli random variables. We can confirm that, for a large sample, the sampled values have sample means and sample correlation that is close to the specified parameters.
#Set parameters
n <- 10^6
PROB1 <- 0.3
PROB2 <- 0.8
CORR <- 0.2
#Generate sample of correlated Bernoulli random variables
set.seed(1)
SAMPLE <- rcorrbinom(n = n, prob1 = PROB1, prob2 = PROB2, corr = CORR)
#Check the properties of the sample
str(SAMPLE)
num [1:2, 1:10000] 0 1 0 1 1 1 0 0 0 1 ...
mean(SAMPLE[1,])
[1] 0.300122
mean(SAMPLE[2,])
[1] 0.800145
cor(SAMPLE[1,], SAMPLE[2,])
[1] 0.20018 | How to generate correlated Bernoulli variables?
Using a mathematical method described by whuber in this related question, I have programmed a function that generates pairs of correlated binomial random variables using the standard syntax for distri |
38,487 | Limit of $n$ times Beta$(1,n)$ variables when $n$ goes to infinity | First, let's get a sense why this should be true. The density of a Beta$(1,n)$ variable which has been multiplied by $n$ should be proportional to
$$\left(\frac{x}{n}\right)^{1-1}\left(1 - \frac{x}{n}\right)^{n-1} \propto \left(1 - \frac{x}{n}\right)^{n-1} \approx e^{-x}$$
(for large $n$, anyway), so it surely looks exponential.
A rigorous and relatively elementary way to demonstrate the result is to work with the distribution function (CDF) $F_n$. That is, suppose $Y_n$ has a Beta$(1,n)$ distribution (for $n \gt 0$) and let $X_n=nY_n$. By definition,
$$F_n(x) = \Pr(X_n \le x) = \Pr(nY_n \le x) = \Pr\left(Y_n \le \frac{x}{n}\right).$$
When $0 \lt x \lt n$, this probability is given by the Beta integral, proportional to
$$\Pr\left(Y_n \le \frac{x}{n}\right) \propto \int_0^{x/n} (1-y)^{n-1} dy = -\frac{1}{n} (1-y)^n|_0^{x/n} \propto 1 - \left(1 - \frac{x}{n}\right)^n.\tag{*}$$
When $x \ge n$, this probability is $1$ (it's no longer given by the integral).
(Notice how freely we may drop any multiplicative constants, like that factor of $1/n$, that do not depend on $y$ or $x$: in the end we only need to establish that the limiting function rises from $0$ to a finite value in the limit as $x\to\infty$. The function then can be divided by that limiting value to produce a genuine distribution.)
The right hand side is often used to define the exponential in the sense that
$$e^{-x} = \lim_{n\to \infty} \left(1 - \frac{x}{n}\right)^n.$$
Since for any $x \gt 0$ and $n\to \infty$ eventually $x\lt n$, the limiting value of $F_n(x)$ is the limiting value of $(*)$: we don't have to worry about the fact that $F_n(x)=1$ when $n$ is small. Furthermore, for $x\le 0$, $F_n(x)=0$ always and so its limiting value obviously is $0$ in such cases.
To illustrate the analysis, this figure plots $F_1$ (blue), $F_2$ (red), $F_4$ (gold), $F_8$ (green), and the limiting distribution (dashed, in gray). Evidently the distributions $F_n$ converge down to their limiting value everywhere $x \gt 0$.
The normalizing constant turns out to be unity, because the limiting value of $1 - e^{-x}$ as $x\to\infty$ is $1$: it already is a valid distribution function.
This shows that $F_n(x)$ approaches $F(x)=1 - e^{-x}$ arbitrarily closely for any $x\gt 0$ for sufficiently large $n$ and otherwise is $0$. This is the standard Exponential distribution. Therefore whenever $(Y_n)$ is a sequence of random variables with Beta$(1,n)$ distributions, the distributions of the random variables $(nY_n)$ converge to the standard Exponential distribution, QED.
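An empirical check of this convergence (a sketch, not part of the original argument; the sample size, evaluation grid, and choices of $n$ are arbitrary):
set.seed(1)
sim_max_gap <- function(n, reps = 1e5) {
  x <- n * rbeta(reps, shape1 = 1, shape2 = n)   # draws of n * Y_n
  q <- seq(0.05, 8, by = 0.05)
  max(abs(ecdf(x)(q) - pexp(q)))                 # sup distance to the Exp(1) CDF
}
sapply(c(2, 5, 20, 100), sim_max_gap)            # the gap shrinks as n grows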
Addendum
It might be worthwhile to show what can go wrong with an analysis of densities (PDFs).
For any $n=1,2,\ldots,$ define a "uniform $n$-distribution" to be an equally-weighted mixture of Normal distributions, each with standard deviation $2^{-2n}$ and located at the odd multiples of $2^{-n}$: that is, at $k2^{-n}$ for $k=1, 3, \ldots, 2^n-1$. Here is a plot of densities of the uniform $n$-distributions for $n=1,2,3,4$:
The uniform $n$-distribution has $2^{n-1}$ spikes and those spikes are occupying exponentially narrower portions of the gaps between their peaks. Because this is a finite mixture of Normal distributions it has a very nice density which is bounded, nonzero, and infinitely differentiable everywhere--one could scarcely complain of any mathematical "pathology." However, this family has been constructed to ensure that the limit of these densities is almost everywhere zero. (This is not hard to prove, but the details might be distracting here, so I will rely on the figure to make the point.) Note that zero itself is a nice function, too: bounded and infinitely differentiable. It's just impossible to normalize it to unit area!
Nevertheless, this sequence of distribution functions does have a limiting distribution function: it is the (usual) Uniform$(0,1)$ distribution. Here is a picture corresponding to the previous one, showing their distribution functions in the same colors:
The limit is a uniform distribution because between any $0 \le a \lt b \le 1$ there are approximately $(b-a)2^{n-1}$ spikes, each with almost all its probability (totaling $2^{1-n}$) concentrated between $a$ and $b$, for a total probability close to $b-a$: that is nearly uniform. In the picture you see a sequence of staircase-like graphs with smaller and smaller steps squeezing down to the slanted ramp (gray dots): that's the Uniform$(0,1)$ CDF.
The problem isn't restricted to densities that converge to $0$. Pick $0 \lt p\lt 1$ and let $(X_n)$ be a sequence of random variables with distributions that are a mixture of $p$ times any absolutely continuous distribution $F$ you want and $1-p$ times the uniform $n$-distributions. This sequence of densities converges to $p$ times the density of $F$. Although that is a nonzero function, it's not a valid density because it integrates to $p$, not to $1$.
The moral is that even when a sequence of very nice density functions converges, it doesn't necessarily converge to a density function. | Limit of $n$ times Beta$(1,n)$ variables when $n$ goes to infinity | First, let's get a sense why this should be true. The density of a Beta$(1,n)$ variable which has been multiplied by $n$ should be proportional to
$$\left(\frac{x}{n}\right)^{1-1}\left(1 - \frac{x}{n | Limit of $n$ times Beta$(1,n)$ variables when $n$ goes to infinity
First, let's get a sense why this should be true. The density of a Beta$(1,n)$ variable which has been multiplied by $n$ should be proportional to
$$\left(\frac{x}{n}\right)^{1-1}\left(1 - \frac{x}{n}\right)^{n-1} \propto \left(1 - \frac{x}{n}\right)^{n-1} \approx e^{-x}$$
(for large $n$, anyway), so it surely looks exponential.
A rigorous and relatively elementary way to demonstrate the result is to work with the distribution function (CDF) $F_n$. That is, suppose $Y_n$ has a Beta$(1,n)$ distribution (for $n \gt 0$) and let $X_n=nY_n$. By definition,
$$F_n(x) = \Pr(X_n \le x) = \Pr(nY_n \le x) = \Pr\left(Y_n \le \frac{x}{n}\right).$$
When $0 \lt x \lt n$, this probability is given by the Beta integral, proportional to
$$\Pr\left(Y_n \le \frac{x}{n}\right) \propto \int_0^{x/n} (1-y)^{n-1} dy = -\frac{1}{n} (1-y)^n|_0^{x/n} \propto 1 - \left(1 - \frac{x}{n}\right)^n.\tag{*}$$
When $x \ge n$, this probability is $1$ (it's no longer given by the integral).
(Notice how freely we may drop any multiplicative constants, like that factor of $1/n$, that do not depend on $y$ or $x$: in the end we only need to establish that the limiting function rises from $0$ to a finite value in the limit as $x\to\infty$. The function then can be divided by that limiting value to produce a genuine distribution.)
The right hand side is often used to define the exponential in the sense that
$$e^{-x} = \lim_{n\to \infty} \left(1 - \frac{x}{n}\right)^n.$$
Since for any $x \gt 0$ and $n\to \infty$ eventually $x\lt n$, the limiting value of $F_n(x)$ is the limiting value of $(*)$: we don't have to worry about the fact that $F_n(x)=1$ when $n$ is small. Furthermore, for $x\le 0$, $F_n(x)=0$ always and so its limiting value obviously is $0$ in such cases.
To illustrate the analysis, this figure plots $F_1$ (blue), $F_2$ (red), $F_4$ (gold), $F_8$ (green), and the limiting distribution (dashed, in gray). Evidently the distributions $F_n$ converge down to their limiting value everywhere $x \gt 0$.
The normalizing constant turns out to be unity, because the limiting value of $1 - e^{-x}$ as $x\to\infty$ is $1$: it already is a valid distribution function.
This shows that $F_n(x)$ approaches $F(x)=1 - e^{-x}$ arbitrarily closely for any $x\gt 0$ for sufficiently large $n$ and otherwise is $0$. This is the standard Exponential distribution. Therefore whenever $(Y_n)$ is a sequence of random variables with Beta$(1,n)$ distributions, the distributions of the random variables $(nY_n)$ converge to the standard Exponential distribution, QED.
Addendum
It might be worthwhile to show what can go wrong with an analysis of densities (PDFs).
For any $n=1,2,\ldots,$ define a "uniform $n$-distribution" to be an equally-weighted mixture of Normal distributions, each with standard deviation $2^{-2n}$ and located at the odd multiples of $2^{-n}$: that is, at $k2^{-n}$ for $k=1, 3, \ldots, 2^n-1$. Here is a plot of densities of the uniform $n$-distributions for $n=1,2,3,4$:
The uniform $n$-distribution has $2^{n-1}$ spikes and those spikes are occupying exponentially narrower portions of the gaps between their peaks. Because this is a finite mixture of Normal distributions it has a very nice density which is bounded, nonzero, and infinitely differentiable everywhere--one could scarcely complain of any mathematical "pathology." However, this family has been constructed to ensure that the limit of these densities is almost everywhere zero. (This is not hard to prove, but the details might be distracting here, so I will rely on the figure to make the point.) Note that zero itself is a nice function, too: bounded and infinitely differentiable. It's just impossible to normalize it to unit area!
Nevertheless, this sequence of distribution functions does have a limiting distribution function: it is the (usual) Uniform$(0,1)$ distribution. Here is a picture corresponding to the previous one, showing their distribution functions in the same colors:
The limit is a uniform distribution because between any $0 \le a \lt b \le 1$ there are approximately $(b-a)2^{n-1}$ spikes, each with almost all its probability (totaling $2^{1-n}$) concentrated between $a$ and $b$, for a total probability close to $b-a$: that is nearly uniform. In the picture you see a sequence of staircase-like graphs with smaller and smaller steps squeezing down to the slanted ramp (gray dots): that's the Uniform$(0,1)$ CDF.
The problem isn't restricted to densities that converge to $0$. Pick $0 \lt p\lt 1$ and let $(X_n)$ be a sequence of random variables with distributions that are a mixture of $p$ times any absolutely continuous distribution $F$ you want and $1-p$ times the uniform $n$-distributions. This sequence of densities converges to $p$ times the density of $F$. Although that is a nonzero function, it's not a valid density because it integrates to $p$, not to $1$.
The moral is that even when a sequence of very nice density functions converges, it doesn't necessarily converge to a density function. | Limit of $n$ times Beta$(1,n)$ variables when $n$ goes to infinity
First, let's get a sense why this should be true. The density of a Beta$(1,n)$ variable which has been multiplied by $n$ should be proportional to
$$\left(\frac{x}{n}\right)^{1-1}\left(1 - \frac{x}{n |
38,488 | Limit of $n$ times Beta$(1,n)$ variables when $n$ goes to infinity | Here is a probabilistic proof of this old problem.
Let $(X_n:n\in\mathbb{N})$ be an i.i.d sequence of exponential random variables with parameter $\theta>0$ ($\mu_{X_1}(dx)=\theta e^{-\theta x}\mathbb{1}_{(0,\infty)}(x)\,dx$). Define
$$W_n=\frac{X_1}{X_1+(X_2+\ldots + X_{n+1})}$$
As $X_1$ has $\operatorname{Gamma}(1,\theta)$ and $X_2+\ldots+X_{n+1}$ has $\operatorname{Gamma}(n,\theta)$ distribution,
$W_n$ has distribution $\operatorname{Beta}(1,n)$. By the law of large numbers
$$nW_n=\frac{X_1}{\tfrac1n X_1+\frac{1}{n}(X_2+\ldots + X_{n+1})}\xrightarrow{n\rightarrow\infty}\frac{X_1}{0+1/\theta}=\theta X_1$$
Notice that $\theta X_1\sim\operatorname{Exp}(1)$. | Limit of $n$ times Beta$(1,n)$ variables when $n$ goes to infinity | Here is probabilistic proof to this old problem.
Let $(X_n:n\in\mathbb{N})$ be an i.i.d sequence of exponential random variables with parameter $\theta>0$ ($\mu_{X_1}(dx)=\theta e^{-\theta x}\mathbb{1 | Limit of $n$ times Beta$(1,n)$ variables when $n$ goes to infinity
Here is a probabilistic proof of this old problem.
Let $(X_n:n\in\mathbb{N})$ be an i.i.d sequence of exponential random variables with parameter $\theta>0$ ($\mu_{X_1}(dx)=\theta e^{-\theta x}\mathbb{1}_{(0,\infty)}(x)\,dx$). Define
$$W_n=\frac{X_1}{X_1+(X_2+\ldots + X_{n+1})}$$
As $X_1$ has $\operatorname{Gamma}(1,\theta)$ and $X_2+\ldots+X_{n+1}$ has $\operatorname{Gamma}(n,\theta)$ distribution,
$W_n$ has distribution $\operatorname{Beta}(1,n)$. By the law of large numbers
$$nW_n=\frac{X_1}{\tfrac1n X_1+\frac{1}{n}(X_2+\ldots + X_{n+1})}\xrightarrow{n\rightarrow\infty}\frac{X_1}{0+1/\theta}=\theta X_1$$
Notice that $\theta X_1\sim\operatorname{Exp}(1)$. | Limit of $n$ times Beta$(1,n)$ variables when $n$ goes to infinity
Here is probabilistic proof to this old problem.
Let $(X_n:n\in\mathbb{N})$ be an i.i.d sequence of exponential random variables with parameter $\theta>0$ ($\mu_{X_1}(dx)=\theta e^{-\theta x}\mathbb{1 |
38,489 | Limit of $n$ times Beta$(1,n)$ variables when $n$ goes to infinity | Another view: if $X_1, \ldots, X_n \text{ i.i.d.} \sim U(0, 1)$, then $X_{(1)} = \min(X_1, \ldots, X_n) \sim \text{Beta}(1, n)$. More generally,
\begin{align}
X_{(i)} \sim \text{Beta}(i, n + 1 - i), i = 1, \ldots, n.
\end{align}
It then follows (noting that, for fixed $x > 0$, we have $x/n < 1$ once $n$ is sufficiently large) from
\begin{align*}
P[nX_{(1)} > x] = P[X_{(1)} > x/n] = \prod_{i = 1}^nP[X_i > x/n] =
\left(1 - \frac{x}{n}\right)^n \to e^{-x}
\end{align*}
that the claimed property holds. | Limit of $n$ times Beta$(1,n)$ variables when $n$ goes to infinity | Another view: if $X_1, \ldots, X_n \text{ i.i.d.} \sim U(0, 1)$, then $X_{(1)} = \min(X_1, \ldots, X_n) \sim \text{Beta}(1, n)$. More generally,
\begin{align}
X_{(i)} \sim \text{Beta}(i, n + 1 - i), | Limit of $n$ times Beta$(1,n)$ variables when $n$ goes to infinity
Another view: if $X_1, \ldots, X_n \text{ i.i.d.} \sim U(0, 1)$, then $X_{(1)} = \min(X_1, \ldots, X_n) \sim \text{Beta}(1, n)$. More generally,
\begin{align}
X_{(i)} \sim \text{Beta}(i, n + 1 - i), i = 1, \ldots, n.
\end{align}
It then follows (noting that, for fixed $x > 0$, we have $x/n < 1$ once $n$ is sufficiently large) from
\begin{align*}
P[nX_{(1)} > x] = P[X_{(1)} > x/n] = \prod_{i = 1}^nP[X_i > x/n] =
\left(1 - \frac{x}{n}\right)^n \to e^{-x}
\end{align*}
that the claimed property holds. | Limit of $n$ times Beta$(1,n)$ variables when $n$ goes to infinity
Another view: if $X_1, \ldots, X_n \text{ i.i.d.} \sim U(0, 1)$, then $X_{(1)} = \min(X_1, \ldots, X_n) \sim \text{Beta}(1, n)$. More generally,
\begin{align}
X_{(i)} \sim \text{Beta}(i, n + 1 - i), |
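The order-statistics view above is easy to check by simulation; a minimal R sketch (with an arbitrary choice of $n$):
set.seed(2)
n <- 40
x1 <- replicate(1e5, min(runif(n)))   # X_(1) = min of n iid Uniform(0,1) draws ~ Beta(1, n)
mean(n * x1 > 1)                      # Monte Carlo estimate of P[n X_(1) > 1]
exp(-1)                               # limiting value, about 0.368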
38,490 | Limit of $n$ times Beta$(1,n)$ variables when $n$ goes to infinity | Thanks to the comments I can now formulate the answer myself, I think:
We know that Y=nX, so we can use a general formula for the pdf of a transformed variable (link of Christoph Hanck):
$\rho_Y(y)=\frac{\rho_X(x)}{|f'(x)|}$
$f'(x)=\frac{d(nx)}{dx}=n$
$\rho_Y(y) = \frac{n(1-y/n)^{n-1}}{n}$
$\lim_{n \to \infty} (1-y/n)^{n-1}=e^{-y}$ | Limit of $n$ times Beta$(1,n)$ variables when $n$ goes to infinity | Thanks to the comments I can now formulate the answer myself, I think:
We know that Y=nX, so we can use a general formula for the pdf of a transformed variable (link of Christoph Hanck):
$\rho_Y(y)=\ | Limit of $n$ times Beta$(1,n)$ variables when $n$ goes to infinity
Thanks to the comments I can now formulate the answer myself, I think:
We know that Y=nX, so we can use a general formula for the pdf of a transformed variable (link of Christoph Hanck):
$\rho_Y(y)=\frac{\rho_X(x)}{|f'(x)|}$
$f'(x)=\frac{d(nx)}{dx}=n$
$\rho_Y(y) = \frac{n(1-y/n)^{n-1}}{n}$
$\lim_{n \to \infty} (1-y/n)^{n-1}=e^{-y}$ | Limit of $n$ times Beta$(1,n)$ variables when $n$ goes to infinity
Thanks to the comments I can now formulate the answer myself, I think:
We know that Y=nX, so we can use a general formula for the pdf of a transformed variable (link of Christoph Hanck):
$\rho_Y(y)=\ |
38,491 | Knots in Smoothing Splines | But, in my opinion wouldn't that be overfitting?
No.
Your equation explains it all.
$$\underbrace{\sum_{i=1}^n(y_i-g(x_i))^2}_\text{residual squares}+\underbrace{\lambda\int g''(t)^2dt}_\text{roughness penalty}$$
The second part $\lambda\int g''(t)^2dt$ is often called a roughness penalty, and $\lambda$ a roughness coefficient. The idea here is that the first and second parts are competing. Think of it this way: if you make your function satisfy $g(x_i)=y_i$, i.e. go through each point exactly, then $\sum_{i=1}^n(y_i-g(x_i))^2=0$, but this usually leads to a very bumpy function that goes up and down trying to pass through each observation, which contains noise. That would increase the contribution of the second part because $g''(x)$ will generally be larger, and depending on $\lambda$ the second part may become very large. Note that $g''(x)$ is an approximation of the curvature of the spline.
So, you may find a curve that doesn't go exactly through each point, with $g(x_i)\ne y_i$ and $\sum_{i=1}^n(y_i-g(x_i))^2>0$, but your function becomes less bumpy and smoother, so that $g''(x)$ becomes smaller and the increase in the first part is compensated by the decrease in the second part. Therefore, the roughness penalty does what shrinkage does: it actually cures overfitting.
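To see the trade-off in practice, here is a minimal R sketch on simulated data (using smooth.spline, where the amount of penalization is controlled through the equivalent degrees of freedom rather than through $\lambda$ directly):
set.seed(3)
x <- seq(0, 1, length.out = 100)
y <- sin(2 * pi * x) + rnorm(100, sd = 0.3)   # noisy observations
wiggly  <- smooth.spline(x, y, df = 50)       # weak penalty: follows the noise
smoothf <- smooth.spline(x, y, df = 6)        # strong penalty: much smoother fit
plot(x, y, col = "grey")
lines(predict(wiggly, x), col = "red")
lines(predict(smoothf, x), col = "blue", lwd = 2)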
Note that the equation you gave is not the only possible way to build a smoothing spline. It's probably the simplest and most intuitive one. You could replace the second part with something different, e.g. $\lambda\int g'(t)^2dt$ would lead to the Laplacian kernel; it minimizes the length of the smooth curve.
The example actually has a simple physical representation. So let's start with an ordinary spline. Imagine that we nail a ring to the board at each coordinate $x_i,y_i$, then pass a flat spline through each ring. The shape of the flat spline is what you get from an ordinary (cubic) spline. Here is how it looks (pic is from Wiki):
Now, instead of the rings, we nail springs into the same points. Then we attach the spline to the springs. Since the springs can extend, the spline will no longer go through each observation! It'll relax a bit. What defines the shape of the new spline? The competition between the potential energy of the springs and the energy of tension in the flat spline. The more you bend the flat spline, the more energy is in its tension, just like with a spring extension.
So, if you recall what the potential energy of a spring is, it's proportional to the square of its extension, which here is given by the error (residual) $e_i=y_i-g(x_i)$, i.e. the sum of squares in the first part of your smoothing spline equation:
Now the second part of your equation gives the potential energy of the tension in the spline. In my example $\lambda\int g'(t)^2dt$ represents an approximation of the length of the spline. So, the shape of the spline will be the one that minimizes the total potential energy (in your case) or sum of the potential energy of spring extensions and the length of the spline (in my example). | Knots in Smoothing Splines | But, in my opinion wouldn't that be overfitting?
No.
Your equation explains it all.
$$\underbrace{\sum_{i=1}^n(y_i-g(x_i))^2}_\text{residual squares}+\underbrace{\lambda\int g''(t)^2dt}_\text{roughne | Knots in Smoothing Splines
But, in my opinion wouldn't that be overfitting?
No.
Your equation explains it all.
$$\underbrace{\sum_{i=1}^n(y_i-g(x_i))^2}_\text{residual squares}+\underbrace{\lambda\int g''(t)^2dt}_\text{roughness penalty}$$
The second part $\lambda\int g''(t)^2dt$ is often called a roughness penalty, and $\lambda$ a roughness coefficient. The idea here is that the first and second parts are competing. Think of it this way: if you make your function satisfy $g(x_i)=y_i$, i.e. go through each point exactly, then $\sum_{i=1}^n(y_i-g(x_i))^2=0$, but this usually leads to a very bumpy function that goes up and down trying to pass through each observation, which contains noise. That would increase the contribution of the second part because $g''(x)$ will generally be larger, and depending on $\lambda$ the second part may become very large. Note that $g''(x)$ is an approximation of the curvature of the spline.
So, you may find a curve that doesn't go exactly through each point, with $g(x_i)\ne y_i$ and $\sum_{i=1}^n(y_i-g(x_i))^2>0$, but your function becomes less bumpy and smoother, so that $g''(x)$ becomes smaller and the increase in the first part is compensated by the decrease in the second part. Therefore, the roughness penalty does what shrinkage does: it actually cures overfitting.
Note that the equation you gave is not the only possible way to build a smoothing spline. It's probably the simplest and most intuitive one. You could replace the second part with something different, e.g. $\lambda\int g'(t)^2dt$ would lead to the Laplacian kernel; it minimizes the length of the smooth curve.
The example actually has a simple physical representation. So let's start with an ordinary spline. Imagine that we nail a ring to the board at each coordinate $x_i,y_i$, then pass a flat spline through each ring. The shape of the flat spline is what you get from an ordinary (cubic) spline. Here is how it looks (pic is from Wiki):
Now, instead of the rings, we nail springs into the same points. Then we attach the spline to the springs. Since the springs can extend, the spline will no longer go through each observation! It'll relax a bit. What defines the shape of the new spline? The competition between the potential energy of the springs and the energy of tension in the flat spline. The more you bend the flat spline, the more energy is in its tension, just like with a spring extension.
So, if you recall what the potential energy of a spring is, it's proportional to the square of its extension, which here is given by the error (residual) $e_i=y_i-g(x_i)$, i.e. the sum of squares in the first part of your smoothing spline equation:
Now the second part of your equation gives the potential energy of the tension in the spline. In my example $\lambda\int g'(t)^2dt$ represents an approximation of the length of the spline. So, the shape of the spline will be the one that minimizes the total potential energy (in your case) or sum of the potential energy of spring extensions and the length of the spline (in my example). | Knots in Smoothing Splines
But, in my opinion wouldn't that be overfitting?
No.
Your equation explains it all.
$$\underbrace{\sum_{i=1}^n(y_i-g(x_i))^2}_\text{residual squares}+\underbrace{\lambda\int g''(t)^2dt}_\text{roughne |
38,492 | Knots in Smoothing Splines | Having read the book myself, this statement refers to a regularized model (edit: I guess all smoothing splines are like that), where every point is a knot, but you are regularizing by adding $\lambda \int g''(x)^2$ to the loss function, so you are punishing a "wiggly function", as expressed in a large absolute second derivative.
I believe they also point out that the $\lambda$ effectively maps to the degrees of freedom of the model (the higher the $\lambda$, the less variance in the estimated model, which means less overfitting)
Edit (also, I'm pretty sure I'm citing ISLR here more or less, so credits to them):
The way an algorithm finds a smoothing spline is by minimizing the equation you outline in 1). This equation has two parts, the RHS and the LHS. The LHS is minimal when all the $g(x_i) = y_i$. The RHS is minimal if the second derivative of $g()$ is 0 everywhere. That means the function is at most linear.
Clearly, these objectives for RHS and LHS are a trade off, because the LHS wants $g$ to be flexible, but the RHS wants $g$ to be linear. This trade-off is regulated through the $\lambda$.
If, e.g., $\lambda = \infty$, the second derivative HAS to be 0 to minimize the loss, which means that $g$ needs to be linear, corresponding to 0 knots (and the model has two degrees of freedom, intercept and slope).
If $\lambda = 0$, the RHS becomes irrelevant, and the objective is to minimize the LHS without constraints, which is achieved by setting one knot at each data point (and hence the model has n degrees of freedom) | Knots in Smoothing Splines | Having read the book myself, this statement refers to a regularized model (edit: I guess all smoothing splines are like that), where every point is a knot, but you are regularizing by adding $\lambda | Knots in Smoothing Splines
Having read the book myself, this statement refers to a regularized model (edit: I guess all smoothing splines are like that), where every point is a knot, but you are regularizing by adding $\lambda \int g''(x)^2$ to the loss function, so you are punishing a "wiggly function", as expressed in a large absolute second derivative.
I believe they also point out that the $\lambda$ effectively maps to the degrees of freedom of the model (the higher the $\lambda$, the less variance in the estimated model, which means less overfitting)
Edit (also, I'm pretty sure I'm citing ISLR here more or less, so credits to them):
The way an algorithm finds a smoothing spline is by minimizing the equation you outline in 1). This equation has two parts, the RHS and the LHS. The LHS is minimal when all the $g(x_i) = y_i$. The RHS is minimal if the second derivative of $g()$ is 0 everywhere. That means the function is at most linear.
Clearly, these objectives for RHS and LHS are a trade off, because the LHS wants $g$ to be flexible, but the RHS wants $g$ to be linear. This trade-off is regulated through the $\lambda$.
If, e.g., $\lambda = \infty$, the second derivative HAS to be 0 to minimize the loss, which means that $g$ needs to be linear, corresponding to 0 knots (and the model has two degrees of freedom, intercept and slope).
If $\lambda = 0$, the RHS becomes irrelevant, and the objective is to minimize the LHS without constraints, which is achieved by setting one knot at each data point (and hence the model has n degrees of freedom) | Knots in Smoothing Splines
Having read the book myself, this statement refers to a regularized model (edit: I guess all smoothing splines are like that), where every point is a knot, but you are regularizing by adding $\lambda |
38,493 | Find CDF from an estimated PDF (estimated by KDE) | There's no need to integrate anything if you know the cdf of the kernel itself. I believe this is straightforward for all the common kernels.
Note that
a KDE is a mixture density
the cdf of a mixture is the mixture of the cdfs.
that is, if $\hat{f}(x)=\frac{1}{n}\sum_i f_i(x)$ is your KDE at $x$, then
$\hat{F}(x)=\frac{1}{n}\sum_i F_i(x)$.
Take a Gaussian kernel for example. If $x_i$ are your observations, $f_i$ is $\frac{1}{\sigma} \phi(\frac{x- x_i}{\sigma})$ and $F_i=\Phi(\frac{x-x_i}{\sigma})$, where commonly $\sigma$ is defined as the bandwidth (in some implementations the bandwidth may be some multiple of $\sigma$).
Indeed, R does that (defines bandwidth = $\sigma$) for all its kernels, not just the Gaussian one. But it's easy as long as you can convert a bandwidth to the parameters of the kernel so you can call a function for the cdf.
So you can evaluate the cdf of your mixture at any $x$ in linear time. If you need it to be able to calculate $\hat{F}$ fast, you could evaluate it over a grid (fine enough to get sufficient accuracy), and use interpolation in between (e.g. in R this is easily done with approxfun; no doubt Python has a convenient way to do something similar)
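For instance, a minimal R sketch of that grid-plus-interpolation idea, using the same mixture-of-Normal-CDFs formula (the data and bandwidth here are just illustrative values):
xdat <- c(11, 12, 16)   # observations (illustrative)
bw   <- 1.5             # bandwidth (illustrative)
grid <- seq(min(xdat) - 4 * bw, max(xdat) + 4 * bw, length.out = 512)
Fhat_grid <- rowMeans(pnorm(outer(grid, xdat, "-"), 0, bw))   # KDE cdf evaluated on the grid
Fhat <- approxfun(grid, Fhat_grid, yleft = 0, yright = 1)     # fast interpolated cdf
Fhat(13)                # estimated P(X <= 13)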
Here's an example of a plot of a kde and cdf for a Gaussian kernel.
Here's the code I used (it was done in R - this is a quick kludge to show the idea, a proper function would be checking arguments, providing better info, labelling axes, letting you specify the kernel and so on). The workhorse is the third line, which defines the function that does all the actual calculation of the cdf, everything else is details of data or plotting.
x <- c(11,12,16) #data
xx <- seq(7,20,.1) # plot values for the cdf
kdecdfnorm <- function(x,xdat,bw) rowMeans(pnorm(outer(x,xdat,"-"),0,bw)) #cdf of KDE
opar <- par() # save graphics parameter settings
par(mfrow=c(1,2)) # 1 x 2 plot grid
kde <- density(x)
plot(kde)
bw <- kde$bw
plot(xx,kdecdfnorm(xx,x,bw),type="l")
abline(h=c(0,1),col=rgb(.5,.5,.5,.5),lty=3)
par(opar) # restore graphics parameters
How does that rowMeans(pnorm(outer(x,xdat,"-"),0,bw)) work?
rowMeans is just doing $\frac{1}{n}\sum_{i=1}^{n}$ of its argument
pnorm is computing the cdf of the Gaussian kernel terms, with bandwidth at its last argument
the first argument to pnorm is just $x-x_i$ over the data values ($x_i$) and the various x's we want to find the curve at
which is to say we're just computing $\frac{1}{n}\sum_i \Phi(\frac{x-x_i}{\sigma})$ in a quite direct way, across whatever values for $x$ we want to calculate it at. | Find CDF from an estimated PDF (estimated by KDE) | There's no need to integrate anything if you know the cdf of the kernel itself. I believe this is straightforward for all the common kernels.
Note that
a KDE is a mixture density
the cdf of a mixtur | Find CDF from an estimated PDF (estimated by KDE)
There's no need to integrate anything if you know the cdf of the kernel itself. I believe this is straightforward for all the common kernels.
Note that
a KDE is a mixture density
the cdf of a mixture is the mixture of the cdfs.
that is, if $\hat{f}(x)=\frac{1}{n}\sum_i f_i(x)$ is your KDE at $x$, then
$\hat{F}(x)=\frac{1}{n}\sum_i F_i(x)$.
Take a Gaussian kernel for example. If $x_i$ are your observations, $f_i$ is $\frac{1}{\sigma} \phi(\frac{x- x_i}{\sigma})$ and $F_i=\Phi(\frac{x-x_i}{\sigma})$, where commonly $\sigma$ is defined as the bandwidth (in some implementations the bandwidth may be some multiple of $\sigma$).
Indeed, R does that (defines bandwidth = $\sigma$) for all its kernels, not just the Gaussian one. But it's easy as long as you can convert a bandwidth to the parameters of the kernel so you can call a function for the cdf.
So you can evaluate the cdf of your mixture at any $x$ in linear time. If you need it to be able to calculate $\hat{F}$ fast, you could evaluate it over a grid (fine enough to get sufficient accuracy), and use interpolation in between (e.g. in R this is easily done with approxfun; no doubt Python has a convenient way to do something similar)
Here's an example of a plot of a kde and cdf for a Gaussian kernel.
Here's the code I used (it was done in R - this is a quick kludge to show the idea, a proper function would be checking arguments, providing better info, labelling axes, letting you specify the kernel and so on). The workhorse is the third line, which defines the function that does all the actual calculation of the cdf, everything else is details of data or plotting.
x <- c(11,12,16) #data
xx <- seq(7,20,.1) # plot values for the cdf
kdecdfnorm <- function(x,xdat,bw) rowMeans(pnorm(outer(x,xdat,"-"),0,bw)) #cdf of KDE
opar <- par() # save graphics parameter settings
par(mfrow=c(1,2)) # 1 x 2 plot grid
kde <- density(x)
plot(kde)
bw <- kde$bw
plot(xx,kdecdfnorm(xx,x,bw),type="l")
abline(h=c(0,1),col=rgb(.5,.5,.5,.5),lty=3)
par(opar) # restore graphics parameters
How does that rowMeans(pnorm(outer(x,xdat,"-"),0,bw)) work?
rowMeans is just doing $\frac{1}{n}\sum_{i=1}^{n}$ of its argument
pnorm is computing the cdf of the Gaussian kernel terms, with bandwidth at its last argument
the first argument to pnorm is just $x-x_i$ over the data values ($x_i$) and the various x's we want to find the curve at
which is to say we're just computing $\frac{1}{n}\sum_i \Phi(\frac{x-x_i}{\sigma})$ in a quite direct way, across whatever values for $x$ we want to calculate it at. | Find CDF from an estimated PDF (estimated by KDE)
There's no need to integrate anything if you know the cdf of the kernel itself. I believe this is straightforward for all the common kernels.
Note that
a KDE is a mixture density
the cdf of a mixtur |
38,494 | Are 1-dimensional numpy arrays equivalent to vectors? [closed] | A NumPy array is an N-dimensional container of
items of
the same type and size. As a computer programming data structure, it is limited
by resources and dtype --- there are values which are not representable by NumPy
arrays. Due to these limitations, NumPy arrays are not exactly equivalent to the
mathematical concept of coordinate vectors. NumPy arrays are often used to
(approximately) represent vectors however.
Math also has a concept of vector spaces whose elements are called vectors. One
example of a vector is an object with direction and magnitude. A coordinate
vector is merely a representation of the vector with respect to a particular
coordinate system. So while a NumPy array can at best record the
coordinates of a vector (tacitly, with respect to a coordinate system), it can
not capture the full abstract notion of a vector. The abstract notion of vector
exists without any mention of coordinate system.
Moreover, vector spaces can be collections of things other than
coordinates. For example, families of functions can form
a vector space. The functions would then be vectors. So here is another example
where NumPy arrays are not at all equivalent to vectors.
Linear algebra makes a distinction between "row vectors" and "column vectors".
There is no such distinction in NumPy. There are only n-dimensional arrays.
Keep in mind that NumPy was built around a desire to generalize array-like containers to N dimensions where N is bigger than 2. So NumPy operations are defined in ways that generalize to higher dimensions.
For example, transposing a NumPy array of shape (a,b,c,d) returns an array of shape (d,c,b,a) -- the axes are reversed. In two dimensions, this means an array of shape (a,b) (i.e. a rows, b columns) becomes an array of shape (b,a) (i.e, b rows, a columns). So NumPy's notion of transposition matches up nicely with the linear algebra notion for 2-dimensional arrays.
But this also means that the transpose of a 1-dimensional NumPy array of shape
(a,) still has shape (a,). Nothing changes. It is still the same
1-dimensional array. Thus there is no real distinction between "row vectors"
and "column vectors".
NumPy apes the concept of row and column vectors using 2-dimensional arrays.
An array of shape (5,1) has 5 rows and 1 column. You can sort of think of this as a column vector, and wherever you would need a column vector in linear algebra, you could use an array of shape (n,1). Similarly, wherever you see a row vector in linear algebra you could use an array of shape (1,n).
However, NumPy also has a concept of broadcasting and one of the rules of broadcasting is that extra axes will be automatically added to any array on the left-hand side of its shape whenever an operation requires it. So,
a 1-dimensional NumPy array of shape (5,) can
broadcast to a 2-dimensional array of shape (1,5) (or 3-dimensional array of
shape (1,1,5), etc).
This means a 1-dimensional array of shape (5,) can be thought of as a row vector since it will automatically broadcast up to an array of shape (1,5) whenever necessary.
On the other hand, broadcasting never adds extra axes on the right-hand side of the shape. You must do so explicitly. So if theta is an array of shape (5,), to create a "column vector" of shape (5,1) you must explicitly add the new axis yourself by using theta[:, np.newaxis] or the shorthand theta[:, None].
What would be the correct numpy equivalent of $\theta^TX$?
If, for example,
In [4]: import numpy as np
In [5]: theta = np.array([1,2,3,4,5])[:, np.newaxis]
In [7]: X = np.random.randint(10, size=(5,3))
In [8]: X
Out[8]:
array([[4, 0, 3],
[6, 9, 1],
[7, 8, 7],
[4, 2, 6],
[7, 7, 2]])
then you could compute $\theta^TX$ using
In [18]: np.dot(theta.T, X)
Out[18]: array([[88, 85, 60]])
Note that np.dot is defined so that
For N dimensions it is a sum product over the last axis of a and
the second-to-last of b
dot(a, b)[i,j,k,m] = sum(a[i,j,:] * b[k,:,m])
This has the property that
For 2-D arrays it is equivalent to matrix multiplication, and for 1-D
arrays to inner product of vectors (without complex conjugation).
Note that NumPy also has a matrix subclass of ndarray whose multiplication operator is defined to match 2-dimensional matrix multiplication. So if theta and X were NumPy matrices, then you could write theta.T * X instead of np.dot(theta.T, X). This can make translating math into NumPy code a bit more readable.
Or, if you have Python3.5 or newer, you can use regular NumPy arrays and write theta.T @ X. | Are 1-dimensional numpy arrays equivalent to vectors? [closed] | A NumPy array is a N-dimensional container of
items of
the same type and size. As a computer programming data structure, it is limited
by resources and dtype --- there are values which are not represe | Are 1-dimensional numpy arrays equivalent to vectors? [closed]
A NumPy array is an N-dimensional container of
items of
the same type and size. As a computer programming data structure, it is limited
by resources and dtype --- there are values which are not representable by NumPy
arrays. Due to these limitations, NumPy arrays are not exactly equivalent to the
mathematical concept of coordinate vectors. NumPy arrays are often used to
(approximately) represent vectors however.
Math also has a concept of vector spaces whose elements are called vectors. One
example of a vector is an object with direction and magnitude. A coordinate
vector is merely a representation of the vector with respect to a particular
coordinate system. So while a NumPy array can at best record the
coordinates of a vector (tacitly, with respect to a coordinate system), it can
not capture the full abstract notion of a vector. The abstract notion of vector
exists without any mention of coordinate system.
Moreover, vector spaces can be collections of things other than
coordinates. For example, families of functions can form
a vector space. The functions would then be vectors. So here is another example
where NumPy arrays are not at all equivalent to vectors.
Linear algebra makes a distinction between "row vectors" and "column vectors".
There is no such distinction in NumPy. There are only n-dimensional arrays.
Keep in mind that NumPy was built around a desire to generalize array-like containers to N dimensions where N is bigger than 2. So NumPy operations are defined in ways that generalize to higher dimensions.
For example, transposing a NumPy array of shape (a,b,c,d) returns an array of shape (d,c,b,a) -- the axes are reversed. In two dimensions, this means an array of shape (a,b) (i.e. a rows, b columns) becomes an array of shape (b,a) (i.e, b rows, a columns). So NumPy's notion of transposition matches up nicely with the linear algebra notion for 2-dimensional arrays.
But this also means that the transpose of a 1-dimensional NumPy array of shape
(a,) still has shape (a,). Nothing changes. It is still the same
1-dimensional array. Thus there is no real distinction between "row vectors"
and "column vectors".
NumPy apes the concept of row and column vectors using 2-dimensional arrays.
An array of shape (5,1) has 5 rows and 1 column. You can sort of think of this as a column vector, and wherever you would need a column vector in linear algebra, you could use an array of shape (n,1). Similarly, wherever you see a row vector in linear algebra you could use an array of shape (1,n).
However, NumPy also has a concept of broadcasting and one of the rules of broadcasting is that extra axes will be automatically added to any array on the left-hand side of its shape whenever an operation requires it. So,
a 1-dimensional NumPy array of shape (5,) can
broadcast to a 2-dimensional array of shape (1,5) (or 3-dimensional array of
shape (1,1,5), etc).
This means a 1-dimensional array of shape (5,) can be thought of as a row vector since it will automatically broadcast up to an array of shape (1,5) whenever necessary.
On the other hand, broadcasting never adds extra axes on the right-hand side of the shape. You must do so explicitly. So if theta is an array of shape (5,), to create a "column vector" of shape (5,1) you must explicitly add the new axis yourself by using theta[:, np.newaxis] or the shorthand theta[:, None].
What would be the correct numpy equivalent of $\theta^TX$?
If, for example,
In [4]: import numpy as np
In [5]: theta = np.array([1,2,3,4,5])[:, np.newaxis]
In [7]: X = np.random.randint(10, size=(5,3))
In [8]: X
Out[8]:
array([[4, 0, 3],
[6, 9, 1],
[7, 8, 7],
[4, 2, 6],
[7, 7, 2]])
then you could compute $\theta^TX$ using
In [18]: np.dot(theta.T, X)
Out[18]: array([[88, 85, 60]])
Note that np.dot is defined so that
For N dimensions it is a sum product over the last axis of a and
the second-to-last of b
dot(a, b)[i,j,k,m] = sum(a[i,j,:] * b[k,:,m])
This has the property that
For 2-D arrays it is equivalent to matrix multiplication, and for 1-D
arrays to inner product of vectors (without complex conjugation).
Note that NumPy also has a matrix subclass of ndarray whose multiplication operator is defined to match 2-dimensional matrix multiplication. So if theta and X were NumPy matrices, then you could write theta.T * X instead of np.dot(theta.T, X). This can make translating math into NumPy code a bit more readable.
Or, if you have Python3.5 or newer, you can use regular NumPy arrays and write theta.T @ X. | Are 1-dimensional numpy arrays equivalent to vectors? [closed]
A NumPy array is a N-dimensional container of
items of
the same type and size. As a computer programming data structure, it is limited
by resources and dtype --- there are values which are not represe |
38,495 | Pooling data for logistic regression | This is all wrong-headed. First, note that there is no meaningful ontological status of 'winner'.
How to determine the quality of something when all you have is a set of results from head-to-head comparisons (e.g., sports teams based on the results of games in a season) is a very tricky question. In the simplest case, a Bradley-Terry model could be used to predict the probability that unit $i$ will beat unit $j$. Bayesian network analyses can also be used.
A Bradley-Terry model wouldn't quite work in your case, but your case is actually a lot simpler: You presumably already have data directly on the quality of each dog as a racing dog. Specifically, you should have each dog's race times. A better race dog is just a faster dog. If you want to determine what variables are related to the ability of a race dog, you need to model racing times. If you want to rank existing dogs, you could fit a Bayesian model, or a mixed effects model and look at the BLUPs. If you wanted to estimate probabilities that dog A will win a given race (e.g., for book-making purposes), you could take fitted race time distributions for each dog in the race and simulate to generate the proportion of runs in which dog A has the lowest time.
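A minimal R sketch of that simulation idea (the per-dog mean times and standard deviations are purely hypothetical, and independent Normal race times are assumed):
set.seed(5)
mu   <- c(A = 30.1, B = 30.4, C = 30.3)   # hypothetical fitted mean times, in seconds
sdev <- c(0.40, 0.50, 0.45)               # hypothetical fitted standard deviations
times <- replicate(1e4, rnorm(3, mean = mu, sd = sdev))   # 3 x 10000 simulated races
winners <- apply(times, 2, which.min)                     # index of the fastest dog in each race
table(names(mu)[winners]) / length(winners)               # estimated win probability for each dog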
Update:
As I understand your situation now from your comment, I gather you want to determine if odds that were given in the past (by whatever method) were reasonable given what you now know about whether a dog actually won its race. This is a different situation than I thought you were asking about in the body of the question. Here you aren't trying to build a model of any type, you are only trying to assess the calibration of the starting odds.
First, note that the odds that a bookmaker (e.g., the track) will offer / list are not the odds that they think are fair. They have to add a cut in order to make a living (cf., Odds made simple). So you need to remove that to get to the actual odds that were believed to be fair.
Once you have those numbers, the simplest check is that they should imply a 100% chance of one of the listed dogs winning. For example, if there were only two dogs and one had an estimated odds of winning of 1 to 3, the other dog's odds should be 3 to 1; if it were 10 to 1, something doesn't add up.
To answer your specific question, if the odds add up, you needn't take into account the number of dogs in a race, because the odds being offered are supposed to account for that, and if they don't, that's something you want to discover.
At this point, you could assess the discriminative performance of the odds by computing Somers' D, which is informationally equivalent to the area under the receiver operating characteristic curve (AUC).
Lastly, you could convert the fair odds into the log odds of winning and use them as a single predictive variable in a logistic regression model. The intercept and slope of that model should be $0$ and $1$, if the odds are not biased. | Pooling data for logistic regression | This is all wrong-headed. First, note that there is no meaningful ontological status of 'winner'.
How to determine the quality of something when all you have is a set of results from head-to-head com | Pooling data for logistic regression
This is all wrong-headed. First, note that there is no meaningful ontological status of 'winner'.
How to determine the quality of something when all you have is a set of results from head-to-head comparisons (e.g., sports teams based on the results of games in a season) is a very tricky question. In the simplest case, a Bradley-Terry model could be used to predict the probability that unit $i$ will beat unit $j$. Bayesian network analyses can also be used.
A Bradley-Terry model wouldn't quite work in your case, but your case is actually a lot simpler: You presumably already have data directly on the quality of each dog as a racing dog. Specifically, you should have each dog's race times. A better race dog is just a faster dog. If you want to determine what variables are related to the ability of a race dog, you need to model racing times. If you want to rank existing dogs, you could fit a Bayesian model, or a mixed effects model and look at the BLUPs. If you wanted to estimate probabilities that dog A will win a given race (e.g., for book-making purposes), you could take fitted race time distributions for each dog in the race and simulate to generate the proportion of runs in which dog A has the lowest time.
Update:
As I understand your situation now from your comment, I gather you want to determine if odds that were given in the past (by whatever method) were reasonable given what you now know about whether a dog actually won its race. This is a different situation than I thought you were asking about in the body of the question. Here you aren't trying to build a model of any type, you are only trying to assess the calibration of the starting odds.
First, note that the odds that a bookmaker (e.g., the track) will offer / list are not the odds that they think are fair. They have to add a cut in order to make a living (cf., Odds made simple). So you need to remove that to get to the actual odds that were believed to be fair.
Once you have those numbers, the simplest check is that they should imply a 100% chance of one of the listed dogs winning. For example, if there were only two dogs and one had an estimated odds of winning of 1 to 3, the other dog's odds should be 3 to 1; if it were 10 to 1, something doesn't add up.
To answer your specific question, if the odds add up, you needn't take into account the number of dogs in a race, because the odds being offered are supposed to account for that, and if they don't, that's something you want to discover.
At this point, you could assess the discriminative performance of the odds by computing Somers' D, which is informationally equivalent to the area under the receiver operating characteristic curve (AUC).
Lastly, you could convert the fair odds into the log odds of winning and use them as a single predictive variable in a logistic regression model. The intercept and slope of that model should be $0$ and $1$, if the odds are not biased. | Pooling data for logistic regression
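A minimal R sketch of that calibration check, on simulated data where the quoted odds are correct by construction (so the fitted intercept and slope should come out near $0$ and $1$):
set.seed(6)
p <- runif(200, 0.05, 0.6)                      # hypothetical fair win probabilities
dat <- data.frame(win      = rbinom(200, 1, p), # observed outcome: 1 if the dog won
                  log_odds = log(p / (1 - p)))  # log odds implied by the fair prices
fit <- glm(win ~ log_odds, family = binomial, data = dat)
coef(fit)   # well-calibrated odds give an intercept near 0 and a slope near 1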
This is all wrong-headed. First, note that there is no meaningful ontological status of 'winner'.
How to determine the quality of something when all you have is a set of results from head-to-head com |
38,496 | Pooling data for logistic regression | That is a fine way to structure the data, yes. Your trepidation comes from your data taking on a multilevel structure: Dogs are nested within races. You can test if you "need" to account for this multilevel structure by doing multilevel modeling. You can specify a model with a random intercept at the race level (Level 2) and one without this random intercept. Then you can compare these two models to see if the addition of the random intercept accounts for a significant proportion of the variance in your outcome.
The lme4 package in R is my go-to for running multilevel models, and it handles logistic regression by using the glmer() function along with the family= argument, specifying binomial. | Pooling data for logistic regression | That is fine way to structure the data, yes. Your trepidation comes from your data taking on a multilevel structure: Dogs are nested within races. You can test if you "need" to account for this multil | Pooling data for logistic regression
That is a fine way to structure the data, yes. Your trepidation comes from your data taking on a multilevel structure: Dogs are nested within races. You can test if you "need" to account for this multilevel structure by doing multilevel modeling. You can specify a model with a random intercept at the race level (Level 2) and one without this random intercept. Then you can compare these two models to see if the addition of the random intercept accounts for a significant proportion of the variance in your outcome.
The lme4 package in R is my go-to for running multilevel models, and it handles logistic regression by using the glmer() function along with the family= argument, specifying binomial. | Pooling data for logistic regression
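A minimal sketch of that comparison in R (the data frame, its columns, and the single predictor are hypothetical, simulated only to make the code runnable):
library(lme4)
set.seed(7)
dogs <- data.frame(race   = factor(rep(1:50, each = 6)),   # 50 races with 6 dogs each
                   weight = rnorm(300, 30, 2))
dogs$win <- rbinom(300, 1, plogis(-2 + 0.3 * (dogs$weight - 30)))   # illustrative outcome
m1 <- glmer(win ~ weight + (1 | race), data = dogs, family = binomial)   # race-level random intercept
m0 <- glm(win ~ weight, data = dogs, family = binomial)                  # no random intercept
summary(m1)   # inspect the estimated variance of the race random intercept
AIC(m0, m1)   # crude comparison of the two fits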
That is fine way to structure the data, yes. Your trepidation comes from your data taking on a multilevel structure: Dogs are nested within races. You can test if you "need" to account for this multil |
38,497 | Pooling data for logistic regression | I don't quite understand what you mean by "pool" here. What you described will certainly get your data in a format that makes it easy to work with in R or Python (although putting it in a data.frame object is cleaner imo).
The fact that different races have different numbers of dogs probably won't be a problem. What will be a problem is the dependency each dog has on its competitors. Dogs don't run a race in a vacuum. Each dog affects the other dogs' probabilities of winning. Your model will tell you what dog-characteristics make up a winning dog, but will not account for dependency... which will probably seriously confound your results. | Pooling data for logistic regression | I don't quite understand what you mean by "pool" here. What you described will certainly get your data in a format that makes it easy to work with in R or Python (although putting it in a data.frame
I don't quite understand what you mean by "pool" here. What you described will certainly get your data in a format that makes it easy to work with in R or Python (although putting it in a data.frame object is cleaner imo).
The fact that different races have different numbers of dogs probably won't be a problem. What will be a problem is the dependency each dog has on its competitors. Dogs don't run a race in a vacuum. Each dog affects the other dogs' probabilities of winning. Your model will tell you what dog-characteristics make up a winning dog, but will not account for dependency... which will probably seriously confound your results.
I don't quite understand what you mean by "pool" here. What you described will certainly get your data in a format that makes it easy to work with in R or Python (although putting it in a data.frame |
38,498 | Why does variance-covariance matrix of $\hat{\beta}$ have transpose inside? | You quote the definition of covariance of two variables, while the form with transpose is the definition of the covariance matrix. In the first case we are talking about two random variables $X$ and $Y$,
$$ \operatorname{cov}(X,Y) = \operatorname{E}{\big[(X - \operatorname{E}[X])(Y - \operatorname{E}[Y])\big]} $$
while in the second case $\mathbf{X}$ is a vector of random variables $X_1,\dots,X_n$
$$ \mathbf{X} = \begin{bmatrix}X_1 \\ \vdots \\ X_n \end{bmatrix} $$
so we are talking about the covariance between multiple variables in the form of a covariance matrix
$$ \Sigma=\mathrm{E}
\left[
\left(
\mathbf{X} - \mathrm{E}[\mathbf{X}]
\right)
\left(
\mathbf{X} - \mathrm{E}[\mathbf{X}]
\right)^{\rm T}
\right] $$
and the transpose appears in here because you are multiplying two vectors. | Why does variance-covariance matrix of $\hat{\beta}$ have transpose inside? | You quote the definition of covariance of two variables, while the form with transpose is the definition of covariance matrix. In first case we are talking about two random variables $X$ and $Y$,
$$ \ | Why does variance-covariance matrix of $\hat{\beta}$ have transpose inside?
You quote the definition of covariance of two variables, while the form with transpose is the definition of the covariance matrix. In the first case we are talking about two random variables $X$ and $Y$,
$$ \operatorname{cov}(X,Y) = \operatorname{E}{\big[(X - \operatorname{E}[X])(Y - \operatorname{E}[Y])\big]} $$
while in the second case $\mathbf{X}$ is a vector of random variables $X_1,\dots,X_n$
$$ \mathbf{X} = \begin{bmatrix}X_1 \\ \vdots \\ X_n \end{bmatrix} $$
so we are talking about the covariance between multiple variables in the form of a covariance matrix
$$ \Sigma=\mathrm{E}
\left[
\left(
\mathbf{X} - \mathrm{E}[\mathbf{X}]
\right)
\left(
\mathbf{X} - \mathrm{E}[\mathbf{X}]
\right)^{\rm T}
\right] $$
and the transpose appears in here because you are multiplying two vectors. | Why does variance-covariance matrix of $\hat{\beta}$ have transpose inside?
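As a small numerical check, here is a minimal R sketch computing a sample covariance matrix both ways, for a data matrix whose rows are observations (using the usual $1/(n-1)$ normalization so that it matches cov()):
set.seed(8)
X  <- matrix(rnorm(200), ncol = 2)              # 100 observations of a 2-dimensional variable
Xc <- scale(X, center = TRUE, scale = FALSE)    # subtract the column means
t(Xc) %*% Xc / (nrow(X) - 1)                    # average of the (x - mean)(x - mean)^T terms
cov(X)                                          # agrees with the built-in covariance matrix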
You quote the definition of covariance of two variables, while the form with transpose is the definition of covariance matrix. In first case we are talking about two random variables $X$ and $Y$,
$$ \ |
38,499 | Why does variance-covariance matrix of $\hat{\beta}$ have transpose inside? | A covariance matrix is matrix-valued. Covariance of two random variables is an element in that matrix, i.e. a scalar. Hence the covariance formula that you list yields a scalar, while computing $xx^T$ for vector-valued $x$ yields a matrix. | Why does variance-covariance matrix of $\hat{\beta}$ have transpose inside? | A covariance matrix is matrix-valued. Covariance of two random variables is an element in that matrix, i.e. a scalar. Hence the covariance formula that you list yields a scalar, while computing $xx^T$ | Why does variance-covariance matrix of $\hat{\beta}$ have transpose inside?
A covariance matrix is matrix-valued. Covariance of two random variables is an element in that matrix, i.e. a scalar. Hence the covariance formula that you list yields a scalar, while computing $xx^T$ for vector-valued $x$ yields a matrix. | Why does variance-covariance matrix of $\hat{\beta}$ have transpose inside?
A covariance matrix is matrix-valued. Covariance of two random variables is an element in that matrix, i.e. a scalar. Hence the covariance formula that you list yields a scalar, while computing $xx^T$ |
38,500 | Why does variance-covariance matrix of $\hat{\beta}$ have transpose inside? | Small example: Consider a 2x1 vector
$$v = \begin{bmatrix}
7\\
6
\end{bmatrix}$$
Note that
$$v'v = \begin{bmatrix}
7&6
\end{bmatrix}\begin{bmatrix}
7\\
6
\end{bmatrix}$$
is a scalar (or 1x1 matrix?)
But
$$vv' = \begin{bmatrix}
7\\
6
\end{bmatrix} \begin{bmatrix}
7&6
\end{bmatrix}$$
is a matrix | Why does variance-covariance matrix of $\hat{\beta}$ have transpose inside? | Small example: Consider a 2x1
$$v = \begin{bmatrix}
7\\
6
\end{bmatrix}$$
Note that
$$v'v = \begin{bmatrix}
7&6
\end{bmatrix}\begin{bmatrix}
7\\
6
\end{bmatrix}$$
is a scalar (or 1x1 matrix?)
But
$$ | Why does variance-covariance matrix of $\hat{\beta}$ have transpose inside?
Small example: Consider a 2x1 vector
$$v = \begin{bmatrix}
7\\
6
\end{bmatrix}$$
Note that
$$v'v = \begin{bmatrix}
7&6
\end{bmatrix}\begin{bmatrix}
7\\
6
\end{bmatrix}$$
is a scalar (or 1x1 matrix?)
But
$$vv' = \begin{bmatrix}
7\\
6
\end{bmatrix} \begin{bmatrix}
7&6
\end{bmatrix}$$
is a matrix | Why does variance-covariance matrix of $\hat{\beta}$ have transpose inside?
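The same computation in R, for concreteness:
v <- c(7, 6)
t(v) %*% v   # 1 x 1: the scalar 7*7 + 6*6 = 85
v %*% t(v)   # 2 x 2 matrix (the outer product)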
Small example: Consider a 2x1
$$v = \begin{bmatrix}
7\\
6
\end{bmatrix}$$
Note that
$$v'v = \begin{bmatrix}
7&6
\end{bmatrix}\begin{bmatrix}
7\\
6
\end{bmatrix}$$
is a scalar (or 1x1 matrix?)
But
$$ |