32,201
Why does the proportion of native language speakers have an arcsine-like distribution?
Question 2: What would be the best way to interpret this observation? For example, does such a shape indicate that a few major/dominant languages will eventually cannibalize the numerous minor languages?

How many subdistricts are there? It looks like in most districts (about 2 or 3 thousand?) one language is dominant, with 80% or more of the people having it as their native language (and high dominance seems to be more likely than slight dominance). As a consequence, this leaves only 20% for the other languages in a district, and that creates this mirrored image. A language is spoken either by many (scoring >80%) or, as a consequence, on the other side only by a few (scoring <20%). (Possibly there might be some bilingual speakers, but I assume that in most cases the native speakers of the languages should add up to more or less 100% in a single subdistrict.)

In short: you don't see many languages in the middle around 50% because there is often a dominant language in a district, which causes a bump at the high end (representing the percentage of native speakers of the dominant language) but also a bump at the low end (representing the percentages of native speakers of the non-dominant languages). A nice way to add information to that graph would be to make a stacked graph where you subdivide the bars and give different colours to the most spoken language, the second most spoken language, and the other languages. In that way you can see how the mirror image is created, with the dominant (most spoken) language on the right and the rest on the left.

Question 1: Why do we have this distribution that roughly resembles an arcsine-like distribution? Note that I am not saying that it is necessarily a perfect arcsine in the theoretical sense, but rather in an engineering-application sense, where it is good enough to assume the nearest matching distribution in order to get the job done. I know that random Brownian motion results in an arcsine distribution, but I am not sure whether that is the underlying reason here. I don't believe that it is as simple as 1-D Brownian motion. But maybe it could be insightful to make some maps and see how the languages are distributed geographically. What I imagine is that the majority of the curve is dominated by the major languages, which are concentrated in regions where they are the first language spoken (see https://commons.m.wikimedia.org/wiki/File:Language_region_maps_of_India.svg#mw-jump-to-license). On top of that you can imagine some mixing of those languages at the borders, which causes the distribution to deviate from a perfect 0/100% split. You might see this spread as some sort of Brownian motion process (but possibly with some attractive forces), and the probability of a language reaching further from its origin decreases, so you get some distribution that might be similar to the arcsine distribution. It will probably be more complex: maybe you could model (approximate) it more generally as a beta distribution, or possibly it is a mixture of something more complex that happens to look like an arcsine.
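As a rough illustration of the "dominant language plus mirror image" mechanism described above, here is a minimal Python sketch. It is not based on the actual census data; the number of subdistricts, number of languages, and Dirichlet parameters are invented purely for illustration. It simulates subdistricts in which one language tends to dominate, pools the per-language proportions, and compares the histogram with the arcsine density, i.e. Beta(1/2, 1/2).

```python
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

n_districts = 3000   # hypothetical number of subdistricts
# hypothetical per-subdistrict language shares: proportions sum to 1,
# and an unequal Dirichlet concentration makes one language dominate.
proportions = []
for _ in range(n_districts):
    p = rng.dirichlet(alpha=[2.0, 0.3, 0.3, 0.2, 0.2])
    proportions.extend(p)
proportions = np.asarray(proportions)

# Histogram of pooled per-language proportions: a bump near 0 and a bump near 1.
plt.hist(proportions, bins=50, density=True, alpha=0.5, label="simulated proportions")

# Compare with the arcsine distribution, i.e. Beta(1/2, 1/2).
x = np.linspace(0.001, 0.999, 500)
plt.plot(x, stats.beta.pdf(x, 0.5, 0.5), label="Beta(1/2, 1/2) density")
plt.xlabel("proportion of native speakers in a subdistrict")
plt.legend()
plt.show()
```

The same simulated data could also be used for the stacked version of the plot suggested above, by colouring each contribution according to whether it comes from the most spoken, second most spoken, or remaining languages of its subdistrict.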
32,202
Why does the proportion of native language speakers have an arcsine-like distribution?
The arcsine density describes a known distribution: the beta distribution $\mathcal{B}(\alpha = 1/2, \beta = 1/2)$. While a random walk would give a good mechanistic explanation, there is perhaps an answer in probability theory: for any district the calculated proportion is a number between $0$ and $1$, so one can see it as the probability that a person from that district speaks the district's official native language. Looking at the whole set of districts, this number can be considered a random variable, and it is well described by the conjugate of the Bernoulli trial distribution, that is, the beta distribution, which has two parameters $\alpha$ and $\beta$. Yet we still need to understand why we should get $\alpha=1/2$ and $\beta =1/2$... Still only half an answer: half full and half empty :-)
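To make the connection concrete, here is a small sketch (with synthetic data, since the census proportions are not reproduced here) showing how one could fit a beta distribution to observed proportions and check whether the estimated shape parameters are close to the arcsine case $\alpha=\beta=1/2$.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Stand-in for the observed district-level proportions (synthetic example).
proportions = stats.beta.rvs(0.5, 0.5, size=5000, random_state=rng)

# Fit a beta distribution on [0, 1] (location and scale held fixed).
alpha_hat, beta_hat, loc, scale = stats.beta.fit(proportions, floc=0, fscale=1)
print(f"fitted alpha = {alpha_hat:.3f}, fitted beta = {beta_hat:.3f}")
# Values near 0.5 / 0.5 would support an arcsine-like shape;
# other values still correspond to a (more general) beta distribution.
```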
32,203
What is the intuition behind the positional cosine encoding in the transformer network?
In positional encoding you encode each dimension with a wave of a different frequency. Together with a position (on this wave) this gives an encoding that corresponds to each input. The encoding is subsequently added to the input. This procedure alters the angle between two embedding vectors. Suppose your word is embedded with a vector $e_1,e_2,\dots ,e_d$. If there were no positional encoding, then the angle between the embedding vectors of the same word would always be 0, regardless of the position of the word in a sentence. Now, you alter the vector with positional encodings $p_1,p_2,\dots ,p_d$ and $p'_1,p'_2,\dots ,p'_d$ for two different positions of the same word in a sentence. The angle becomes: $$\cos(\alpha)=\frac{\sum_{i=1}^d(e_i+p_i)(e_i+p'_i)}{\sqrt{\left(\sum_{j=1}^d(e_j+p_j)^2\right)\left(\sum_{j=1}^d(e_j+p'_j)^2\right)}}$$ Depending on the difference in position, the angle deviates more or less from zero. Why not concatenate? Concatenation would not merely change the angles; it would keep the positional information in separate, orthogonal dimensions. In the procedure above we are altering the vectors themselves, in effect scaling their dimensions differently, which changes both their lengths and their angles.
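A minimal sketch of the idea, following the sinusoidal scheme from "Attention Is All You Need" (the embedding values below are random placeholders): it builds the sin/cos positional encodings, adds them to the same word embedding at two positions, and shows that the cosine similarity between the two resulting vectors depends on the positional offset.

```python
import numpy as np

def positional_encoding(max_len: int, d_model: int) -> np.ndarray:
    """Sinusoidal positional encodings, shape (max_len, d_model)."""
    positions = np.arange(max_len)[:, None]              # (max_len, 1)
    dims = np.arange(d_model)[None, :]                   # (1, d_model)
    angle_rates = 1.0 / np.power(10000.0, (2 * (dims // 2)) / d_model)
    angles = positions * angle_rates
    pe = np.zeros((max_len, d_model))
    pe[:, 0::2] = np.sin(angles[:, 0::2])
    pe[:, 1::2] = np.cos(angles[:, 1::2])
    return pe

d_model = 64
pe = positional_encoding(max_len=100, d_model=d_model)

rng = np.random.default_rng(0)
e = rng.normal(size=d_model)   # the same word embedding, used at two positions

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

for pos_a, pos_b in [(0, 1), (0, 5), (0, 50)]:
    sim = cosine(e + pe[pos_a], e + pe[pos_b])
    print(f"positions {pos_a} and {pos_b}: cosine similarity {sim:.3f}")
# Without the positional encodings the similarity would always be exactly 1.
```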
32,204
Variance of Normal Order Statistics
I found someone had indeed provided the approximation above. It is on page 120 of their book [1] and page 12 of their accompanying course material [2]. I believe the result was first presented systematically by David and Johnson [3], which included higher-order terms. Section 4.6 of David and Nagaraja's book [4] provides, in my opinion, a more accessible explanation of David and Johnson's results. The author of [1,2] states that the variance of the $k^{\textrm{th}}$ order statistic can be estimated as: $$\textrm{Var}(X_{(k)}) \approx \frac{p(1-p)}{(n+2)(f(\theta))^2},$$ where $f(\cdot)$ is the PDF of $X$, $p = \frac{k}{n+1}$, and $\theta$ is the $p^\textrm{th}$ quantile of the distribution. Applying this to the normal case, we have $\theta = \Phi^{-1} (\frac{k}{n+1})$, and one can easily verify that the referenced variance estimate equals the variance derived in the original question after some rearrangement of terms. [1] Jenny A. Baglivo (2005) Mathematica laboratories for mathematical statistics: Emphasizing simulation and computer intensive methods. [2] Jenny A. Baglivo (2018) MATH4427 Notebook 4 - Fall Semester 2017/2018 - Boston College. URL: https://www2.bc.edu/jenny-baglivo/MT427/notebook04.pdf [3] F. N. David and N. L. Johnson (1954) Statistical treatment of censored data: Part I. Fundamental formulae. Biometrika, vol. 41, pp. 228–240. [4] H. A. David and H. N. Nagaraja (2004) Order statistics. Encyclopedia of Statistical Sciences.
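A quick Python check of the approximation for the standard normal (the sample size and order below are arbitrary choices): it compares the quantile-based variance approximation with a Monte Carlo estimate of the variance of the $k$-th order statistic.

```python
import numpy as np
from scipy import stats

n, k = 20, 5                       # arbitrary sample size and order
p = k / (n + 1)
theta = stats.norm.ppf(p)          # p-th quantile of the standard normal

# Approximation: Var(X_(k)) ≈ p(1-p) / ((n+2) f(theta)^2)
approx = p * (1 - p) / ((n + 2) * stats.norm.pdf(theta) ** 2)

# Monte Carlo estimate of the variance of the k-th order statistic.
rng = np.random.default_rng(0)
samples = np.sort(rng.standard_normal((200_000, n)), axis=1)
mc = samples[:, k - 1].var()

print(f"approximation: {approx:.4f}, Monte Carlo: {mc:.4f}")
```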
32,205
What can Deep Neural Networks do that Support Vector Machines can't?
I will list a few areas where I am fairly confident DNNs perform better than SVMs, and it's not just "hype". I'm sure there are more, just as I'm sure there are places where SVMs would do better. In particular, I've found that lots of people who ask questions like these are thinking only about fully connected networks (a.k.a. feed-forward networks, multi-layer perceptrons, ANNs, ...), typically being applied to more standard "tabular"-style data. In these cases, I have not seen incredible results from DNNs, and if that is your only experience it may be easy to believe it is just hype. Where DNNs really shine is with convolutional neural networks, but also in their ability to handle sequential data, their ability to generate data, and reinforcement learning (e.g. learning to play Go or Atari games). I'll go into detail on a few below.

Images

DNNs, in particular convolutional neural networks (CNNs), are the clear state of the art on almost every image processing task. I'm not sure anyone is seriously suggesting that SVMs reach comparable performance on classification datasets like ImageNet, CIFAR-10, or even MNIST. This goes doubly for "dense" image predictions, e.g. given a 500x500 image of a CT scan, classify exactly which pixels are a tumor (see U-Net: Convolutional Networks for Biomedical Image Segmentation by Olaf Ronneberger, Philipp Fischer, and Thomas Brox for one of the earlier works in this area). I'm not sure what it would even look like for an SVM to do that. Note that while images are the poster-child application for CNNs, they can also be readily applied to other tasks where signal processing may historically have been used, and in my personal experience have reached better performance levels.

Data Generation

There are a number of data generation tasks that are currently areas of research. For example, using Generative Adversarial Networks to generate new images (in practice, though, this is typically just used for unsupervised feature learning). There is also interesting work being done to try to generate art/music (https://magenta.tensorflow.org/). Similar to dense image predictions, I'm not sure what it would look like for SVMs to do this. Maybe there are people doing fascinating work in this area with SVMs - I'm not going to claim to be an expert on that - but my impression is that it is not happening. I will note that data generation is not purely academic; there are uses in voice synthesis (think Siri, Cortana, or Google Assistant), and likely other areas.

Reinforcement Learning

Researchers have been able to train DNNs to learn how to play Atari games using only the raw pixel data and the scores as input (https://deepmind.com/research/dqn/). Perhaps this is just my inexperience, but this is a feat I would struggle to achieve with SVMs.
32,206
Is stochastic gradient descent biased?
For a typical loss function $L = E_{x_i \sim \text{D}}[f(x_i)]$ with true gradient $\nabla L = E[\nabla f(x_i)]$, the expectation of the SGD gradient is $E[\nabla f(x')]$, where $x'$ is the data point in our batch (of size 1 here). This is clearly unbiased. The loss function in the paper takes the form $L = \log E[e^{f(x)}]$ and has the gradient $$\nabla L = \frac{1}{E[e^{f(x)}]} E[\nabla e^{f(x)}] = \frac{E[\nabla f(x) e^{f(x)}]}{E[e^{f(x)}]} $$ Note that the SGD gradient $\frac{\nabla f(x') e^{f(x')}}{e^{f(x')}} = \nabla f(x')$ is biased. However, what if we only "did SGD" for the numerator and computed the exact expectation for the denominator? This pseudo-SGD gradient $\frac{\nabla f(x') e^{f(x')}}{E[e^{f(x)}]}$ is indeed unbiased. Although it is too expensive to recompute the denominator at every SGD step, if we assume that the parameters of $f$ do not change too rapidly (and therefore $f(x)$ also does not change rapidly), one way to estimate the denominator is with an exponentially weighted moving average. This leads to a relatively unbiased estimate.
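A small numerical illustration of the bias, using an invented toy model $f_\theta(x) = \theta x$ and made-up data purely to show the effect: it compares the exact gradient of $\log E[e^{f(x)}]$ with the average naive single-sample SGD gradient and with the single-sample numerator divided by the exact denominator.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1000)    # toy data set
theta = 0.7                  # toy parameter; f_theta(x) = theta * x

f = theta * x
grad_f = x                   # d f_theta(x) / d theta = x

# Exact gradient of L = log E[exp(f(x))]:  E[grad_f * e^f] / E[e^f]
exact = np.mean(grad_f * np.exp(f)) / np.mean(np.exp(f))

# Naive per-sample SGD gradient: grad_f(x') * e^{f(x')} / e^{f(x')} = grad_f(x').
# Its expectation over single samples is just the mean of grad_f, which differs
# from the exact gradient -> biased.
naive = np.mean(grad_f)

# Per-sample numerator, exact denominator (the "pseudo-SGD" gradient): unbiased.
pseudo = np.mean(grad_f * np.exp(f) / np.mean(np.exp(f)))

print(f"exact: {exact:.4f}  naive SGD (biased): {naive:.4f}  pseudo-SGD: {pseudo:.4f}")
```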
32,207
What is the difference between model selection and hyperparameter tuning?
The way I look at it (others may disagree!) is that it's all the same problem, but some hyperparameters are easier to judge the effects of and optimize than others, and you aren't always able to give an acceptable quantification of every aspect under consideration.

For instance, you could fit a ridge-penalized logistic regression and jointly optimize the link function, which features are included, and the ridge penalty by a search over $$ \{\text{probit},\text{logit}\} \times \{0,1\}^p \times [0,\infty) $$ to minimize the negative log likelihood. But if you're in a typical statistics situation this will be a really high-variance optimization (it's about as discrete as it gets, so good luck doing this well for a large number of features) and will probably really hurt your generalization, plus you'll probably want to make these decisions on scientific grounds anyway. So it's not that you couldn't treat these all as one big hyperparameter and optimize them; it's more that that just isn't a helpful way to look at it. So instead you'd pick a sensible link and include all the features that you think make scientific sense, and then tune only the ridge penalty (if you even still want to do a ridge regression).

Or maybe you have 5 different models and you evaluate them on AIC/BIC. This is like having a one-dimensional grid search with each cell being a model, so it's not actually any different. But probably you're not just thinking about the *IC values and there are other concerns not represented by that one number, so you wouldn't actually do this as an optimization, because your objective function fails to capture every aspect of the problem. Other parameters, like $\lambda$ in a ridge regression, don't have as much of an interpretation or as many scientific issues attached, so it's no problem to just optimize them, and it's a feasible thing to do too.

And speaking of *IC, you can definitely use AIC and BIC for more machine-learning-style models. They both have asymptotic relationships to cross validation, so it's all getting at the same idea. Just as an example, I found the paper "AIC and BIC based approaches for SVM parameter value estimation with RBF kernels" from 2012 by Demyanov et al., so there are definitely people in machine learning thinking about these things.

So that's my opinion, at least: there aren't any fundamental differences, but in practice there are a lot of modeling decisions that we're not just going to cross-validate over, so it's nice to have other tools for them. Sometimes it's easy criteria like *IC (these don't require fitting a model on multiple subsets, so they are pretty convenient if you're not basing your life on them), other times graphical assessments of a model or scientific concerns, and other times we can reduce it to a numerical optimization.
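As a concrete (hypothetical) example of the "just optimize the easy hyperparameter" case, here is a short sketch of tuning only the ridge penalty of a penalized logistic regression by cross-validation on synthetic data; the link choice and feature selection are left to scientific judgment, as argued above.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

# Synthetic stand-in data.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# L2 (ridge) penalized logistic regression; C is the inverse penalty strength.
model = LogisticRegression(penalty="l2", solver="lbfgs", max_iter=1000)
grid = {"C": np.logspace(-3, 3, 13)}

search = GridSearchCV(model, grid, cv=5, scoring="neg_log_loss")
search.fit(X, y)
print("best C:", search.best_params_["C"])
```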
32,208
Improving the minimum estimator
I have no clear answer about the existence of an unbiased estimator. However, in terms of estimation error, estimating $\min(\mu_1, \dots, \mu_n)$ is an intrinsically difficult problem in general. For instance, let $Y_1, \dots, Y_N \sim N(\mu, \sigma^2I)$, with $\mu = (\mu_1, \dots, \mu_n)$. Let $\theta = \min_i \mu_i$ be the target quantity and $\hat{\theta}$ an estimate of $\theta$. If we use the "naive" estimator $\hat{\theta} = \min_i(\bar{Y}_i)$, where $\bar{Y_i} = \frac{1}{N}\sum_{j=1}^N Y_{i,j}$, then the $L_2$ estimation error is upper bounded by $$ \mathbb{E}[\hat{\theta} - \theta]^2 \lessapprox \frac{\sigma^2\log n}{N} $$ up to a constant. (Note that the estimation error for each $\mu_i$ is $\frac{\sigma^2}{N}$.) Of course, if the $\mu_i$'s are far away from each other and $\sigma$ is very small, the estimation error should reduce to $\frac{\sigma^2}{N}$. However, in the worst case, no estimate of $\theta$ works better than the naive estimator. More precisely, one can show that $$ \inf_{\hat{\theta}} \sup_{\mu_1, \dots,\mu_n} \mathbb{E}[\hat{\theta} - \theta]^2 \gtrapprox \frac{\sigma^2\log n}{N} $$ where the infimum is taken over all possible estimates of $\theta$ based on the sample $Y_1,\dots, Y_N$ and the supremum over all possible configurations of the $\mu_i$'s. Therefore the naive estimator is minimax optimal up to a constant, and there is no better estimate of $\theta$ in this sense.
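A quick simulation of the hard case described above (all means equal, chosen here as an illustrative worst-case configuration): it estimates the mean squared error of the naive minimum-of-means estimator and compares it with the $\sigma^2 \log n / N$ rate.

```python
import numpy as np

rng = np.random.default_rng(0)

n, N, sigma = 50, 20, 1.0     # n means, N observations each
mu = np.zeros(n)              # worst case: all means equal, so theta = 0
theta = mu.min()

n_rep = 5000
errors = np.empty(n_rep)
for r in range(n_rep):
    Y = mu + sigma * rng.standard_normal((N, n))   # N i.i.d. draws of the n-vector
    theta_hat = Y.mean(axis=0).min()               # naive estimator: min of sample means
    errors[r] = (theta_hat - theta) ** 2

print(f"MSE of naive estimator: {errors.mean():.4f}")
print(f"sigma^2 * log(n) / N  : {sigma**2 * np.log(n) / N:.4f}")
print(f"sigma^2 / N           : {sigma**2 / N:.4f}")
```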
32,209
Improving the minimum estimator
EDIT: The following answers a different question than what was asked - it is framed as if $\mu$ is considered random, but it does not work when $\mu$ is considered fixed, which is probably what the OP had in mind. If $\mu$ is fixed, I don't have a better answer than $\min(\hat\mu_1,...,\hat\mu_n)$.

If we only consider estimates of the mean and covariance, we can treat $(\mu_1, ..., \mu_n)$ as a single sample from a multivariate normal distribution. A simple way to get an estimate of the minimum is then to draw a large number of samples from $MVN(\hat{\mu}, \Sigma)$, calculate the minimum of each sample and then take the mean of those minima.

The above procedure and its limitations can be understood in Bayesian terms - taking the notation from Wikipedia on the MVN, if $\Sigma$ is the known covariance of the estimators and we have one observation, the joint posterior distribution is $\mu \sim MVN\left(\frac{\hat{\mu} + m \lambda_0}{1 + m}, \frac{1}{1+m} \Sigma\right)$, where $\lambda_0$ and $m$ come from the prior $\mu \sim MVN(\lambda_0, m^{-1} \Sigma)$ taken before observing any data. Since you are probably not willing to put priors on $\mu$, we can take the limit as $m \rightarrow 0$, resulting in a flat prior, and the posterior becomes $\mu \sim MVN(\hat{\mu}, \Sigma)$. However, with the flat prior we are implicitly assuming that the elements of $\mu$ differ a lot (if all real numbers are equally likely, getting similar values is very unlikely). A quick simulation shows that this procedure slightly overestimates $\min(\mu)$ when the elements of $\mu$ differ a lot and underestimates $\min(\mu)$ when the elements are similar. One could argue that without any prior knowledge this is correct behavior. If you are willing to state at least some prior information (e.g. $m = 0.1$), the results could become a bit better behaved for your use case.

If you are willing to assume more structure, you might be able to choose a better distribution than the multivariate normal. Also it might make sense to use Stan or another MCMC sampler to fit the estimates of $\mu$ in the first place. This will get you a set of samples of $(\mu_1, ..., \mu_n)$ that reflect the uncertainty in the estimators themselves, including their covariance structure (possibly richer than what an MVN can provide). Once again you can then compute the minimum for each sample to get a posterior distribution over minima, and take the mean of this distribution if you need a point estimate.
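A minimal sketch of the sampling procedure described above (the point estimates $\hat\mu$ and covariance $\Sigma$ below are made up for illustration): draw many samples from $MVN(\hat\mu, \Sigma)$, take the minimum of each draw, and average.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical estimates of the means and their covariance.
mu_hat = np.array([1.2, 0.8, 1.5, 0.9])
Sigma = np.diag([0.05, 0.08, 0.03, 0.06])

# Posterior draws under the flat-prior approximation mu ~ MVN(mu_hat, Sigma).
draws = rng.multivariate_normal(mu_hat, Sigma, size=100_000)

minima = draws.min(axis=1)          # minimum of each sampled mean vector
print("naive estimate        :", mu_hat.min())
print("posterior mean of min :", minima.mean())
print("posterior 95% interval:", np.quantile(minima, [0.025, 0.975]))
```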
32,210
MLE of $f(x;\alpha,\theta)=\frac{e^{-x/\theta}}{\theta^{\alpha}\Gamma(\alpha)}x^{\alpha-1}$
Let $\psi(\alpha) = \frac{\Gamma'(\alpha)}{\Gamma(\alpha)}$, so $\psi$ is the digamma function (I'm using $\psi$ rather than your $\Psi$). By the AM-GM inequality $$ \bar x \geq \left(\prod_i x_i\right)^{1/n} $$ so $$ \log \bar x - \overline{\log x} \geq 0 $$ (where $\log \bar x$ and $\log x_i$ are defined almost surely). Furthermore, equality only holds for $x_1=\dots=x_n$, which is a probability $0$ event, so $\log \bar x - \overline{\log x} > 0$ almost surely. For simplicity, I'll take $y = \log \bar x - \overline{\log x}$. Consider $f(\alpha) = \log(\alpha) - \psi(\alpha)$ on $(0,\infty)$. This is continuous and $$ \lim_{\alpha\to 0} f(\alpha) = \infty $$ $$ \lim_{\alpha\to\infty} f(\alpha) = 0 $$ so by the intermediate value theorem $f$ takes every value in $(0,\infty)$. In particular, this means that $$ f^{-1}\left(\left\{y\right\}\right) \neq \emptyset $$ i.e. there is at least one point in $(0,\infty)$ mapped to $y$, since $y > 0$. Furthermore, $f$ turns out to be injective on $(0,\infty)$ as $f' < 0$ (we have $f'(\alpha) = 1/\alpha - \psi'(\alpha)$, and $\psi'(\alpha) > 1/\alpha$ for $\alpha > 0$), so there is actually a unique $\hat \alpha$ with $f(\hat\alpha) = y$. Actually finding this $\hat \alpha$ will require numerical methods though, as @StubbornAtom says.
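A sketch of the numerical step (with simulated gamma data standing in for $x_1,\dots,x_n$): solve $\log\alpha - \psi(\alpha) = \log\bar x - \overline{\log x}$ by root finding, then set $\hat\theta = \bar x / \hat\alpha$ for the scale.

```python
import numpy as np
from scipy.special import digamma
from scipy.optimize import brentq
from scipy import stats

rng = np.random.default_rng(0)
x = stats.gamma.rvs(a=2.5, scale=1.3, size=2000, random_state=rng)  # simulated data

y = np.log(x.mean()) - np.mean(np.log(x))   # log(xbar) - mean(log x) > 0 a.s.

# Solve log(alpha) - digamma(alpha) = y; the left-hand side is decreasing in alpha.
alpha_hat = brentq(lambda a: np.log(a) - digamma(a) - y, 1e-8, 1e8)
theta_hat = x.mean() / alpha_hat            # MLE of the scale given alpha_hat

print(f"alpha_hat = {alpha_hat:.3f}, theta_hat = {theta_hat:.3f}")
```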
32,211
What is meant by the non-gaussianity in the independent component analysis (ICA)?
First we look at the central limit theorem, which is basically concerned with the tendency of estimates of the mean of independently drawn variables from any arbitrary distribution to follow a Gaussian distribution. This matters because in real-world samples we are often observing data that is in fact a composite of many underlying factors, and based on the central limit theorem we understand that linear combinations of independent variables create an aggregate variable that tends towards Gaussian in nature. Aggregates of non-independent variables can retain non-Gaussian distributions, because the distributions are linked, but if the variables are independent then their combination will tend towards Gaussian (just as the sum of multiple independent fair dice tends towards a normal distribution). What we want to achieve with ICA is to separate out the independent variables that underlie the observed data, i.e. to reverse the central limit theorem. Since a linear combination of independent variables is more Gaussian than the original variables (unless at least one is Gaussian), it follows that using non-Gaussianity is required to identify the underlying variables. Thus ICA is built on using the assumption of non-Gaussianity in the latent factors to tease them apart. If more than one underlying factor is Gaussian then they will not be separated by ICA, since the separation is based on deviation from normality. Basically, two Gaussian variables give a circular joint density for which any rotation is equally good, so there is no single solution. https://web.archive.org/web/20210303213322/fourier.eng.hmc.edu/e161/lectures/ica/node3.html http://wwwf.imperial.ac.uk/~nsjones/TalkSlides/HyvarinenSlides.pdf
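A small sketch of this with scikit-learn's FastICA (synthetic signals, just to illustrate the point about non-Gaussianity): two non-Gaussian sources are recovered well from their mixtures, whereas two Gaussian sources are not identifiable beyond a rotation.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n = 5000

# Two non-Gaussian (uniform) sources and a mixing matrix.
S = rng.uniform(-1, 1, size=(n, 2))
A = np.array([[1.0, 0.5],
              [0.3, 1.0]])
X = S @ A.T                       # observed mixtures

S_hat = FastICA(n_components=2, random_state=0).fit_transform(X)

# Correlate recovered components with the true sources (up to sign/permutation).
corr = np.corrcoef(np.hstack([S, S_hat]).T)[:2, 2:]
print("non-Gaussian sources, |correlations|:\n", np.round(np.abs(corr), 2))

# Repeat with Gaussian sources: recovery fails (the mixture is rotation-invariant).
S_gauss = rng.normal(size=(n, 2))
X_gauss = S_gauss @ A.T
S_hat_gauss = FastICA(n_components=2, random_state=0).fit_transform(X_gauss)
corr_gauss = np.corrcoef(np.hstack([S_gauss, S_hat_gauss]).T)[:2, 2:]
print("Gaussian sources, |correlations|:\n", np.round(np.abs(corr_gauss), 2))
```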
32,212
Regularized linear vs. RKHS-regression
As you have probably noticed when writing down the optimization problems, the only difference in the minimization is which Hilbert norm to use for penalization, that is, how to quantify what 'large' values of $\alpha$ are for penalization purposes. In the RKHS setting, we use the RKHS inner product, $\alpha^tK\alpha$, whereas ridge regression penalizes with respect to the Euclidean norm.

An interesting theoretical consequence is how each method affects the spectrum of the reproducing kernel $K$. By RKHS theory, $K$ is symmetric positive definite. By the spectral theorem, we can write $K = U^tDU$ where $D$ is the diagonal matrix of eigenvalues and $U$ is the orthonormal matrix of eigenvectors. Consequently, in the RKHS setting, \begin{align} (K+\lambda nI)^{-1}Y &= [U^t(D+\lambda nI)U]^{-1}Y\\ &= U^t[D+\lambda nI]^{-1}UY. \end{align} Meanwhile, in the ridge regression setting, note that $K^tK=K^2$ by symmetry, so \begin{align} (K^2+\lambda nI)^{-1}KY &= [U^t(D^2+\lambda nI)U]^{-1}KY\\ &= U^t[D^2+\lambda nI]^{-1}UKY\\ &= U^t[D^2+\lambda nI]^{-1}DUY\\ &= U^t[D+\lambda nD^{-1}]^{-1}UY. \end{align} Let the spectrum of $K$ be $\nu_1,\ldots,\nu_n$. In RKHS regression, the eigenvalues are stabilized by $\nu_i\rightarrow\nu_i+\lambda n$. In ridge regression, we have $\nu_i\rightarrow \nu_i + \lambda n/\nu_i$. As a result, RKHS modifies the eigenvalues uniformly, while ridge adds a larger value the smaller the corresponding $\nu_i$ is.

Depending on the choice of kernel, the two estimates for $\alpha$ may be close to or far from each other. The distance in the operator norm sense is \begin{align} \|{\alpha_\text{RKHS}-\alpha_\text{Ridge}}\|_{\ell^2} &= \|{ A_\text{RKHS}Y-A_\text{Ridge}Y }\|_{\ell^2}\\ &\le \|[D+\lambda nI]^{-1}-[D+\lambda n D^{-1}]^{-1}\|_\infty\|Y\|_{\ell^2}\\ &\le \max_{i=1,\ldots,n}\left\{| (\nu_i+\lambda n)^{-1} - (\nu_i+\lambda n/\nu_i)^{-1} |\right\}\|Y\|_{\ell^2}\\ &\le \max_{i=1,\ldots,n}\left\{ \frac{\lambda n|1-\nu_i|}{(\nu_i+\lambda n)(\nu_i^2+\lambda n)} \right\}\|Y\|_{\ell^2}\\ \end{align} However, this is still bounded for a given $Y$, so your two estimators cannot be arbitrarily far apart. Hence, if your kernel is close to the identity, there will most likely be little difference between the approaches. If your kernels are vastly different, the two approaches can still lead to similar results.

In practice, it is hard to say definitively whether one is better than the other for a given situation. As we are minimizing with respect to the squared error when representing the data in terms of the kernel function, we are effectively choosing a best regression curve from the corresponding Hilbert space of functions. Hence, penalizing with respect to the RKHS inner product seems to be the natural way to proceed.
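A small numerical sketch of the two estimators (with an arbitrary RBF kernel and synthetic data): it computes $\alpha_{\text{RKHS}} = (K+\lambda n I)^{-1}Y$ and $\alpha_{\text{Ridge}} = (K^2+\lambda n I)^{-1}KY$ and compares the coefficient vectors and the fitted values.

```python
import numpy as np

rng = np.random.default_rng(0)
n, lam = 60, 0.1

x = np.sort(rng.uniform(0, 3, n))
y = np.sin(2 * x) + 0.2 * rng.standard_normal(n)

# RBF (Gaussian) kernel matrix; bandwidth chosen arbitrarily.
K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / 0.25)

alpha_rkhs = np.linalg.solve(K + lam * n * np.eye(n), y)
alpha_ridge = np.linalg.solve(K @ K + lam * n * np.eye(n), K @ y)

print("||alpha_RKHS - alpha_Ridge||      =", np.linalg.norm(alpha_rkhs - alpha_ridge))
print("||K alpha_RKHS - K alpha_Ridge||  =", np.linalg.norm(K @ (alpha_rkhs - alpha_ridge)))
```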
32,213
Can I apply word2vec to find document similarity?
Some time ago I tried this idea on the 20 newsgroups data. I used GloVe embeddings from the authors' site (the Wikipedia ones). Aggregating word embeddings using TF-IDF doesn't give good results; it is actually worse than just using TF-IDF features. See the results in this notebook (Accuracy on tfidf data vs Accuracy on weighted embedded words). I also made plots of truncated SVD/PCA of the encoded documents - it seems like aggregated embeddings just make everything close to everything. To illustrate this I tried to find the closest words for the document encodings in the word-embedding space - it seems like they just lie close to common words (see Closest $10$ words to mean-aggregated texts). That being said, this notebook is just a toy example and it only suggests that the simplest approach won't work for this data. For instance, I didn't try to filter out common words based on some threshold. Also, maybe it would make more sense to first extract summaries from the documents (for example, TextRank sort of retrieves the most informative paragraphs based partly on the TF-IDF scores of their words). If you want to try more elaborate techniques, I think that Gensim covers much of this stuff (for example extractive summarization via TextRank and similar algorithms).
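For reference, here is a minimal sketch of the aggregation being discussed (with a toy corpus and random vectors standing in for pretrained GloVe embeddings): each document is represented by the TF-IDF-weighted average of its word vectors, and documents are compared by cosine similarity.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "the cat sat on the mat",
    "a dog chased the cat",
    "stock markets fell sharply today",
]

tfidf = TfidfVectorizer()
W = tfidf.fit_transform(docs).toarray()        # (n_docs, n_vocab) TF-IDF weights
vocab = tfidf.get_feature_names_out()

# Random vectors as a stand-in for pretrained embeddings (e.g. GloVe).
rng = np.random.default_rng(0)
dim = 50
E = np.stack([rng.normal(size=dim) for _ in vocab])   # (n_vocab, dim)

# Document vectors: TF-IDF-weighted average of word vectors.
doc_vecs = (W @ E) / np.maximum(W.sum(axis=1, keepdims=True), 1e-12)

print(np.round(cosine_similarity(doc_vecs), 2))
```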
32,214
Can I apply word2vec to find document similarity?
There is nothing wrong with the method; it has been explored in the literature a lot. This is, for instance, the way many papers evaluate word embeddings extrinsically, using tasks like classification. One would expect, however, to lose accuracy as the length of the documents increases. Models like doc2vec have been proposed to address such limitations, but it is always better to test them on your benchmark.
Can I apply word2vec to find document similarity?
There is nothing wrong with the method, it has been explored in the literature a lot. This is for instance the way that many papers use to evaluate extrinsically word embeddings with tasks like classi
Can I apply word2vec to find document similarity? There is nothing wrong with the method, it has been explored in the literature a lot. This is for instance the way that many papers use to evaluate extrinsically word embeddings with tasks like classification. One would expect however to lose in terms of accuracy as the length of the documents increases. Models like doc2vec have been proposed to address such limitations, but it is always better to test them in your benchmark.
Can I apply word2vec to find document similarity? There is nothing wrong with the method, it has been explored in the literature a lot. This is for instance the way that many papers use to evaluate extrinsically word embeddings with tasks like classi
32,215
Can I apply word2vec to find document similarity?
Basically, the word2vec method intrinsically takes into account the tf (term frequency) of each word, so there is no need to emphasize it twice. On the other hand, it may be a good idea to emphasize the words with high tf-idf, owing to the fact that these words are not seen often enough in the training phase. I think the way to do that is not simple multiplication; rather, you can feed the network with the contexts of high tf-idf words more often than the other contexts.
Can I apply word2vec to find document similarity?
Basically the word2vec method intrinsically takes into account the tf (term frequency) of each word. There is no need to emphasis it twice. on the other hand maybe it is a good idea to emphasis on the
Can I apply word2vec to find document similarity? Basically the word2vec method intrinsically takes into account the tf (term frequency) of each word. There is no need to emphasis it twice. on the other hand maybe it is a good idea to emphasis on the words with high tf-idf owing the fact that these words are not seen enough in the training phase. I think the way to do that is not simple multiplication however you can feed the network with the context of high tf-idf words more than the other contexts.
Can I apply word2vec to find document similarity? Basically the word2vec method intrinsically takes into account the tf (term frequency) of each word. There is no need to emphasis it twice. on the other hand maybe it is a good idea to emphasis on the
32,216
How to fit the SIR and SEIR models to the epidemiological data?
I am going to confine my comments to the SEIR model - the issues for the SIR model are similar and it can be treated as a special limiting case of the SEIR model anyway (for large $\delta$). What you've done so far I've had a look at your MATLAB code, which seems absolutely fine to me. For a given set of model parameters, your code solves the SEIR differential equations to give functions $S(t)$,$E(t)$, $I(t)$, $R(t)$ on some time interval. You then calculate the cumulative state $J(t):=\int_0^t I(u) du$ which is used as a basis for fitting the model (correct me if I'm wrong here). Available data: you have time series $C_{data}(t)$ and $M_{data}(t)$, which are the cumulative number of cases and deaths respectively. Model fitting proceeds by minimising the difference between the curves $J(t)$ and $C_{data}(t)-M_{data}(t)$. (This assumes a disease case corresponds to an individual transitioning to the $I$ state.) A poor fit is obtained. It's also questionable how meaningful the confidence intervals are - the lower limit is often negative even though all model parameters are constrained to be positive. Model vs data I can see several issues with the way that the specified SEIR model relates to the available data. Firstly $J(t)$ above does not represent the number of infectious individuals, which is simply $I(t)$. It seems that you actually want to be equating $I(t)$ to $C_{data}(t)-M_{data}(t)$. Computing $J(t)$ seems unnecessary. Second, it appears that you're implicitly assuming that 'recovery' (transition to the $R$ category) always leads to death. However, I understand that - in the case of Ebola - it is also possible to be 'cured'. So, the available death data can't be directly related to the variables in the SEIR model you set up. This points to the need for a model that will take account of the different recovery modes that are possible with Ebola. A third issue is that, by subtracting one data time series from the other, you're losing some of the information in the original data. Ideally it would be good to fit the model using both of the available time series. Modified SEIR model and fitting procedure To improve model fitting I would suggest looking at the modelling done in this paper. Here they use a modified SEIR model for Ebola, which looks something like \begin{align} {\mathrm d S \over \mathrm d t} &= -\beta {S I \over N}\\[1.5ex] {\mathrm d E \over \mathrm d t} &= \beta {S I \over N} - \delta E \\[1.5ex] {\mathrm d I \over \mathrm d t} &= \delta E - \gamma I \\[1.5ex] {\mathrm d R \over \mathrm d t} &= (1-f)\gamma I \\ \end{align} Here $f$ is the case fatality rate, so the $R$ state corresponds to 'cured'. In the context of this model, the cumulative number of cases is $C(t)=\int_0^t \delta E(u)du$ and the cumulative number of deaths is $M(t)=\int_0^t f\gamma I(u)du$. Perhaps it would be possible to fit these two curves simultaneously in MATLAB? Other models More complex models are of course possible e.g. see this paper where additional disease categories are used. We could also add stochasticity, more detailed contact structure models, etc. Fitting transmission models to the 2014 Ebola outbreak data is an active area of research. Still, you might hope to get a reasonable fit using the modified SEIR model above. What I'm trying to say is that fitting transmission models to the Ebola outbreak data is not a trivial task! Finally: the paper you refer to does not appear to be a peer reviewed journal article. It's also anonymous. I wouldn't rely on it as an information source.
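To make the suggested setup concrete, here is a minimal R sketch (the question used MATLAB; this is just my illustration with made-up parameter values and initial conditions) of how the modified SEIR system and the two cumulative curves $C(t)$ and $M(t)$ can be generated with the deSolve package. Fitting would then minimise the joint discrepancy between $(C, M)$ and the observed cumulative case and death series.

library(deSolve)

seir <- function(t, state, p) {
  with(as.list(c(state, p)), {
    dS <- -beta * S * I / N
    dE <-  beta * S * I / N - delta * E
    dI <-  delta * E - gamma * I
    dR <-  (1 - f) * gamma * I
    dC <-  delta * E            # cumulative cases
    dM <-  f * gamma * I        # cumulative deaths
    list(c(dS, dE, dI, dR, dC, dM))
  })
}

# made-up parameter values and initial state, purely for illustration
p  <- c(beta = 0.3, delta = 1/10, gamma = 1/7, f = 0.6, N = 1e6)
y0 <- c(S = 1e6 - 10, E = 0, I = 10, R = 0, C = 10, M = 0)

out <- ode(y = y0, times = 0:200, func = seir, parms = p)

# C and M are the model curves to be compared with the observed cumulative
# case and death counts (e.g. by least squares on both series jointly)
head(out[, c("time", "C", "M")])

Tracking $C$ and $M$ as extra states avoids having to integrate $\delta E$ and $f\gamma I$ numerically after the fact.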
How to fit the SIR and SEIR models to the epidemiological data?
I am going to confine my comments to the SEIR model - the issues for the SIR model are similar and it can be treated as a special limiting case of the SEIR model anyway (for large $\delta$). What you'
How to fit the SIR and SEIR models to the epidemiological data? I am going to confine my comments to the SEIR model - the issues for the SIR model are similar and it can be treated as a special limiting case of the SEIR model anyway (for large $\delta$). What you've done so far I've had a look at your MATLAB code, which seems absolutely fine to me. For a given set of model parameters, your code solves the SEIR differential equations to give functions $S(t)$,$E(t)$, $I(t)$, $R(t)$ on some time interval. You then calculate the cumulative state $J(t):=\int_0^t I(u) du$ which is used as a basis for fitting the model (correct me if I'm wrong here). Available data: you have time series $C_{data}(t)$ and $M_{data}(t)$, which are the cumulative number of cases and deaths respectively. Model fitting proceeds by minimising the difference between the curves $J(t)$ and $C_{data}(t)-M_{data}(t)$. (This assumes a disease case corresponds to an individual transitioning to the $I$ state.) A poor fit is obtained. It's also questionable how meaningful the confidence intervals are - the lower limit is often negative even though all model parameters are constrained to be positive. Model vs data I can see several issues with the way that the specified SEIR model relates to the available data. Firstly $J(t)$ above does not represent the number of infectious individuals, which is simply $I(t)$. It seems that you actually want to be equating $I(t)$ to $C_{data}(t)-M_{data}(t)$. Computing $J(t)$ seems unnecessary. Second, it appears that you're implicitly assuming that 'recovery' (transition to the $R$ category) always leads to death. However, I understand that - in the case of Ebola - it is also possible to be 'cured'. So, the available death data can't be directly related to the variables in the SEIR model you set up. This points to the need for a model that will take account of the different recovery modes that are possible with Ebola. A third issue is that, by subtracting one data time series from the other, you're losing some of the information in the original data. Ideally it would be good to fit the model using both of the available time series. Modified SEIR model and fitting procedure To improve model fitting I would suggest looking at the modelling done in this paper. Here they use a modified SEIR model for Ebola, which looks something like \begin{align} {\mathrm d S \over \mathrm d t} &= -\beta {S I \over N}\\[1.5ex] {\mathrm d E \over \mathrm d t} &= \beta {S I \over N} - \delta E \\[1.5ex] {\mathrm d I \over \mathrm d t} &= \delta E - \gamma I \\[1.5ex] {\mathrm d R \over \mathrm d t} &= (1-f)\gamma I \\ \end{align} Here $f$ is the case fatality rate, so the $R$ state corresponds to 'cured'. In the context of this model, the cumulative number of cases is $C(t)=\int_0^t \delta E(u)du$ and the cumulative number of deaths is $M(t)=\int_0^t f\gamma I(u)du$. Perhaps it would be possible to fit these two curves simultaneously in MATLAB? Other models More complex models are of course possible e.g. see this paper where additional disease categories are used. We could also add stochasticity, more detailed contact structure models, etc. Fitting transmission models to the 2014 Ebola outbreak data is an active area of research. Still, you might hope to get a reasonable fit using the modified SEIR model above. What I'm trying to say is that fitting transmission models to the Ebola outbreak data is not a trivial task! Finally: the paper you refer to does not appear to be a peer reviewed journal article. 
It's also anonymous. I wouldn't rely on it as an information source.
How to fit the SIR and SEIR models to the epidemiological data? I am going to confine my comments to the SEIR model - the issues for the SIR model are similar and it can be treated as a special limiting case of the SEIR model anyway (for large $\delta$). What you'
32,217
Group elastic net
Let $\mathcal{G}$ be the grouping that you're interested in; that is, let $\mathcal{G}$ be a partition of $\{1, \dots, p\}$, where we consider there to be $p$ features. With response $y \in \mathbb{R}^n$ and design matrix $X \in \mathbb{R}^{n \times p}$, the group lasso estimator is $$\arg\min_{\beta \in \mathbb{R}^p} \frac{1}{2n} \|y - X \beta \|_2^2 + \lambda \sum_{g \in \mathcal{G}} |g|^{1/2} \|\beta_g\|_2.$$ Applying another squared $\ell_2$ penalty to induce overall shrinkage, we'd get the estimator $$\arg\min_{\beta \in \mathbb{R}^p} \frac{1}{2n} \|y - X \beta \|_2^2 + \lambda \sum_{g \in \mathcal{G}} |g|^{1/2} \|\beta_g\|_2 + \mu \|\beta\|_2^2.$$ We might call this the "group elastic net". By Lagrangian duality, we can write \begin{align*} \arg\min_{\beta \in \mathbb{R}^p} & \frac{1}{2n} \|y - X \beta \|_2^2 + \lambda \sum_{g \in \mathcal{G}} |g|^{1/2} \|\beta_g\|_2 + \mu \|\beta\|_2^2 \\ = \, \arg\min_{\beta \in \mathbb{R}^p \, : \, \|\beta\|_2^2 \leq C} & \frac{1}{2n} \|y - X \beta \|_2^2 + \lambda \sum_{g \in \mathcal{G}} |g|^{1/2} \|\beta_g\|_2 \\ = \, \arg\min_{\beta \in \mathbb{R}^p \, : \, \|\beta\|_2 \leq \sqrt{C}} & \frac{1}{2n} \|y - X \beta \|_2^2 + \lambda \sum_{g \in \mathcal{G}} |g|^{1/2} \|\beta_g\|_2 \\ = \, \arg\min_{\beta \in \mathbb{R}^p} & \frac{1}{2n} \|y - X \beta \|_2^2 + \lambda \sum_{g \in \mathcal{G}} |g|^{1/2} \|\beta_g\|_2 + \tilde\mu \|\beta\|_2 \\ = \, \arg\min_{\beta \in \mathbb{R}^p} & \frac{1}{2n} \|y - X \beta \|_2^2 + \left( \lambda \sum_{g \in \mathcal{G}} |g|^{1/2} \|\beta_g\|_2 + \tilde\mu' p^{1/2} \|\beta\|_2 \right), \end{align*} where $\tilde\mu$ is the corresponding dual variable and $\tilde\mu' = p^{-1/2} \tilde\mu$. As we can see, this last expression is a group lasso with "overlapping" groups, since $\mathcal{G} \cup \{\{1, \dots, p\}\}$ is no longer a partition. Further, the group $\{1, \dots, p\}$ has a dual variable (or tuning variable) $\tilde\mu$ which is distinct from the tuning parameter $\lambda$ for the other groups. This optimization problem can be solved using the package gglasso. Reading the section on page 9 of the documentation here will tell you about the gglasso function, which should be used. Note that the argument pmax will have to be manually supplied with a last component which will serve as a tuning parameter.
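If you just want something to experiment with, here is a rough proximal-gradient sketch of the first ("group elastic net") formulation above, written by me purely for illustration; the step size, iteration count, and toy data are naive choices, and this is not the gglasso route recommended above.

# a rough proximal-gradient sketch of
#   (1/(2n))||y - X b||^2 + lambda * sum_g sqrt(|g|) ||b_g|| + mu * ||b||^2
group_enet <- function(X, y, groups, lambda, mu, iters = 2000) {
  n <- nrow(X); p <- ncol(X)
  b <- numeric(p)
  # Lipschitz constant of the smooth part, used for the step size 1/L
  L <- max(eigen(crossprod(X) / n, symmetric = TRUE, only.values = TRUE)$values) + 2 * mu
  for (it in 1:iters) {
    grad <- crossprod(X, X %*% b - y) / n + 2 * mu * b   # gradient of the smooth part
    v <- b - grad / L                                    # gradient step
    for (g in unique(groups)) {                          # group soft-thresholding
      idx <- which(groups == g)
      thr <- lambda * sqrt(length(idx)) / L
      nrm <- sqrt(sum(v[idx]^2))
      v[idx] <- if (nrm > thr) (1 - thr / nrm) * v[idx] else 0
    }
    b <- as.numeric(v)
  }
  b
}

# toy use: 3 groups of 2 coefficients each (all values made up)
set.seed(2)
X <- matrix(rnorm(100 * 6), 100, 6)
y <- as.numeric(X[, 1:2] %*% c(2, -1) + rnorm(100))
round(group_enet(X, y, groups = rep(1:3, each = 2), lambda = 0.1, mu = 0.01), 3)

Because the squared $\ell_2$ term is smooth, it simply joins the least-squares gradient, and only the group-lasso part needs the proximal (group soft-thresholding) step.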
Group elastic net
Let $\mathcal{G}$ be the grouping that you're interested in; that is, let $\mathcal{G}$ be a partition of $\{1, \dots, p\}$, where we consider there to be $p$ features. With response $y \in \mathbb{R}
Group elastic net Let $\mathcal{G}$ be the grouping that you're interested in; that is, let $\mathcal{G}$ be a partition of $\{1, \dots, p\}$, where we consider there to be $p$ features. With response $y \in \mathbb{R}^n$ and design matrix $X \in \mathbb{R}^{n \times p}$, the group lasso estimator is $$\arg\min_{\beta \in \mathbb{R}^p} \frac{1}{2n} \|y - X \beta \|_2^2 + \lambda \sum_{g \in \mathcal{G}} |\mathcal{G}|^{1/2} \|\beta_g\|_2.$$ Applying another squared $\ell_2$ penalty to induce overall shrinkage, we'd get the estimator $$\arg\min_{\beta \in \mathbb{R}^p} \frac{1}{2n} \|y - X \beta \|_2^2 + \lambda \sum_{g \in \mathcal{G}} |\mathcal{G}|^{1/2} \|\beta_g\|_2 + \mu \|\beta\|_2^2.$$ We might call this the "group elastic net". By Lagrangian duality, we can write \begin{align*} \arg\min_{\beta \in \mathbb{R}^p} & \frac{1}{2n} \|y - X \beta \|_2^2 + \lambda \sum_{g \in \mathcal{G}} |\mathcal{G}|^{1/2} \|\beta_g\|_2 + \mu \|\beta\|_2^2 \\ = \, \arg\min_{\beta \in \mathbb{R}^p \, : \, \|\beta\|_2^2 \leq C} & \frac{1}{2n} \|y - X \beta \|_2^2 + \lambda \sum_{g \in \mathcal{G}} |\mathcal{G}|^{1/2} \|\beta_g\|_2 \\ = \, \arg\min_{\beta \in \mathbb{R}^p \, : \, \|\beta\|_2 \leq \sqrt{C}} & \frac{1}{2n} \|y - X \beta \|_2^2 + \lambda \sum_{g \in \mathcal{G}} |\mathcal{G}|^{1/2} \|\beta_g\|_2 \\ = \, \arg\min_{\beta \in \mathbb{R}^p} & \frac{1}{2n} \|y - X \beta \|_2^2 + \lambda \sum_{g \in \mathcal{G}} |\mathcal{G}|^{1/2} \|\beta_g\|_2 + \tilde\mu \|\beta\|_2 \\ = \, \arg\min_{\beta \in \mathbb{R}^p} & \frac{1}{2n} \|y - X \beta \|_2^2 + \left( \lambda \sum_{g \in \mathcal{G}} |\mathcal{G}|^{1/2} \|\beta_g\|_2 + \tilde\mu' p^{1/2} \|\beta\|_2 \right), \end{align*} where $\tilde\mu$ is the corresponding dual variable and $\tilde\mu' = p^{-1/2} \tilde\mu$. As we can see, this last expression is a group lasso with "overlapping" groups, since $\mathcal{G} \cup \{1, \dots, p\}$ is no longer a partition. Further, the group $\{1, \dots, p\}$ has a dual variable (or tuning variable) $\tilde\mu$ which is distinct from the dual variable $\lambda$ for the other groups. This can be optimization problem can be solved using the package gglasso. Reading the section on page 9 of the documentation here will tell you about the gglasso function, which should be used. Note that the argument pmax will have to manually supplied with a last component which will serve as a tuning parameter.
Group elastic net Let $\mathcal{G}$ be the grouping that you're interested in; that is, let $\mathcal{G}$ be a partition of $\{1, \dots, p\}$, where we consider there to be $p$ features. With response $y \in \mathbb{R}
32,218
How to use restricted cubic splines with the R mice imputation package
You are right that the imputation model needs to be as rich or richer than the outcome model. The fact that imputation based on full maximum likelihood estimation and imputation done by mice assume linearity everywhere was a prime reason I wrote the R Hmisc package aregImpute function, which creates imputation models automatically using rich additive restricted cubic spline models. So linearity is not assumed for multiple imputation. The default approach in aregImpute is predictive mean matching, which I generally prefer over more parametric approaches (splines are still used; PMM is less parametric on the left hand side of models). Like mice, aregImpute uses chained equations. Unlike mice, it uses bootstrap draws instead of approximate (assuming multivariate normality) Bayesian posterior draws.
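A minimal usage sketch (my own; the data, variable names and knot counts are placeholders, so treat the calls only as an outline) could look like the following, with fit.mult.impute then fitting a restricted-cubic-spline outcome model across the completed data sets.

library(rms)    # loads Hmisc, which provides aregImpute

set.seed(1)
d <- data.frame(y  = rnorm(200),
                x1 = rnorm(200),
                x2 = factor(sample(c("a", "b", "c"), 200, TRUE)))
d$x1[sample(200, 30)] <- NA                  # introduce some missing values

# flexible, spline-based imputation model with predictive mean matching (the default)
imp <- aregImpute(~ y + x1 + x2, data = d, n.impute = 5)

# outcome model with a restricted cubic spline, combined over the imputations
f <- fit.mult.impute(y ~ rcs(x1, 4) + x2, ols, imp, data = d)
f

The spline transformations and predictive mean matching used by default are exactly what the paragraph above refers to; see ?aregImpute for the full set of options.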
How to use restricted cubic splines with the R mice imputation package
You are right that the imputation model needs to be as rich or richer than the outcome model. The fact that imputation based on full maximum likelihood estimation and imputation done by mice assume l
How to use restricted cubic splines with the R mice imputation package You are right that the imputation model needs to be as rich or richer than the outcome model. The fact that imputation based on full maximum likelihood estimation and imputation done by mice assume linearity everywhere was a prime reason I wrote the R Hmisc package aregImpute function, which creates imputation models automatically using rich additive restricted cubic spline models. So linearity is not assumed for multiple imputation. The default approach in aregImpute is predictive mean matching, which I generally prefer over more parametric approaches (splines are still used; PMM is less parametric on the left hand side of models). Like mice, aregImpute uses chained equations. Unlike mice, it uses bootstrap draws instead of approximate (assuming multivariate normality) Bayesian posterior draws.
How to use restricted cubic splines with the R mice imputation package You are right that the imputation model needs to be as rich or richer than the outcome model. The fact that imputation based on full maximum likelihood estimation and imputation done by mice assume l
32,219
Why is ergodicity not a requirement for ARIMA models besides stationarity?
A bit technical maybe, but stationary ARMA processes are by construction mean-ergodic (as the other answer correctly pointed out, a previous version of this answer did not spell that out clearly and simply wrote "ergodic"; mean-ergodicity is maybe the most important "flavor" of ergodicity and hence is sometimes treated as synonymous with ergodicity, which, as this discussion shows, it should not be). First, here is a sufficient condition for mean ergodicity: Theorem: Let $Y_t$ be covariance stationary with $E(Y_t)=\mu$ and $Cov(Y_t,Y_{t-j})=\gamma_j$ such that $\sum_{j=0}^\infty|\gamma_j|<\infty$. Then $$\bar{Y}_T\to_p \mu$$ Proof: We shall actually prove that $\bar{Y}_T$ converges to $\mu$ in mean square, which implies convergence in probability. Write \begin{eqnarray*} E(\bar{Y}_T- \mu)^2&=&E\left[(1/T)\sum_{t=1}^T(Y_t- \mu)\right]^2\\ &=&1/T^2E[\{(Y_1- \mu)+(Y_2- \mu)+\ldots+(Y_T- \mu)\}\\ &&\quad\{(Y_1- \mu)+(Y_2- \mu)+\ldots+(Y_T- \mu)\}]\\ &=&1/T^2\{[\gamma_0+\gamma_1+\ldots+\gamma_{T-1}]+[\gamma_1+\gamma_0+\gamma_1+\ldots+\gamma_{T-2}]\\ &&\quad+\ldots+[\gamma_{T-1}+\gamma_{T-2}+\ldots+\gamma_1+\gamma_0]\} \end{eqnarray*} Thus, \begin{eqnarray*} E(\bar{Y}_T- \mu)^2&=& 1/T^2\{T\gamma_0+2(T-1)\gamma_1+2(T-2)\gamma_2+\ldots+2\gamma_{T-1}\} \end{eqnarray*} Put differently, \begin{eqnarray*} E(\bar{Y}_T- \mu)^2&=& 1/T\{\gamma_0+2(T-1)\gamma_1/T+2(T-2)\gamma_2/T+\ldots+2\gamma_{T-1}/T\} \end{eqnarray*} This expression tends to zero as $T\to\infty$, as $TE(\bar{Y}_T- \mu)^2$ remains bounded, because \begin{eqnarray*} TE(\bar{Y}_T- \mu)^2&=& |\gamma_0+2(T-1)\gamma_1/T+2(T-2)\gamma_2/T+\ldots+2\gamma_{T-1}/T|\\ &\leqslant&|\gamma_0|+2(T-1)|\gamma_1|/T+2(T-2)|\gamma_2|/T+\ldots+2|\gamma_{T-1}|/T\\ &\leqslant&|\gamma_0|+2|\gamma_1|+2|\gamma_2|+\ldots+2|\gamma_{T-1}|\\ &\to&c<\infty, \end{eqnarray*} using summability of the autocovariances. That is, if the autocovariances decay sufficiently quickly, mean ergodicity follows. We next show that any causal $ARMA(p,q)$ process is mean-ergodic, as it has the required summable autocovariances. Let us look at the $MA(\infty)$ representation and use the triangle inequality to bound the sufficient condition for mean ergodicity of a stationary/causal process from above. Stationarity implies that a causal, or $MA(\infty)$ with summable coefficients, representation of the process exists. The claim is therefore shown if we can show that summability of the $MA(\infty)$ coefficients $\sum_{j=0}^\infty|\psi_j|<\infty$ implies $\sum_{k=0}^\infty|\gamma_k|<\infty$ where $\gamma_k=\sigma^2\sum_{j=0}^{\infty}\psi_j\psi_{j+k}$ is the $k$th autocovariance of an $MA(\infty)$-process. We write \begin{eqnarray*} \sum_{k=0}^\infty|\gamma_k|&=&\sum_{k=0}^\infty\left|\sigma^2\sum_{j=0}^{\infty}\psi_j\psi_{j+k}\right|\\ &=&\sigma^2\sum_{k=0}^\infty\left|\sum_{j=0}^{\infty}\psi_j\psi_{j+k}\right|\\ &\leqslant&\sigma^2\sum_{k=0}^\infty\sum_{j=0}^{\infty}\left|\psi_j\psi_{j+k}\right|\\ &=&\sigma^2\sum_{k=0}^\infty\sum_{j=0}^{\infty}\left|\psi_j\right|\left|\psi_{j+k}\right|\\ &=&\sigma^2\sum_{j=0}^{\infty}\left|\psi_j\right|\sum_{k=0}^\infty\left|\psi_{j+k}\right|\\ &\leqslant&\sigma^2\sum_{j=0}^{\infty}\left|\psi_j\right|\sum_{k=0}^\infty\left|\psi_{k}\right|\\ &<&\infty \end{eqnarray*} Here, the first inequality uses the triangle inequality. Summability of the coefficients permits interchanging the order of summation in the fourth equality (and hence taking out $|\psi_j|$ which does not depend on $k$). 
The second inequality follows because the second summation additionally has the terms $\psi_0,\ldots,\psi_{j-1}$ for $j>0$. The last inequality then follows from summability of the coefficients.
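As a small numerical illustration of the conclusion (my addition, with arbitrary parameter values): for a stationary AR(1), the sample mean of a single long realisation approaches the process mean, exactly as the mean-square argument above predicts.

set.seed(42)
mu <- 5
y <- mu + arima.sim(model = list(ar = 0.7), n = 1e5)   # one long stationary AR(1) path

# running sample means along the single realisation approach mu = 5
sapply(c(1e2, 1e3, 1e4, 1e5), function(k) mean(y[1:k]))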
Why is ergodicity not a requirement for ARIMA models besides stationarity?
A bit technical maybe, but stationary ARMA processes are by construction mean-ergodic (as the other answer correctly pointed out, a previous version of my answer did not spell that out clearly and wro
Why is ergodicity not a requirement for ARIMA models besides stationarity? A bit technical maybe, but stationary ARMA processes are by construction mean-ergodic (as the other answer correctly pointed out, a previous version of my answer did not spell that out clearly and wrote ergodic as mean-ergodicity is maybe the most important "flavor" of ergodicity and hence sometimes treated synonymously with erdogicity, which, as this discussion shows, it should indeed not). First, here is a sufficient condition for mean ergodicity: Theorem: Let $Y_t$ be covariance stationary with $E(Y_t)=\mu$ and $Cov(Y_t,Y_{t-j})=\gamma_j$ such that $\sum_{j=0}^\infty|\gamma_j|<\infty$. Then $$\bar{Y}_T\to_p \mu$$ Proof: We shall actually prove that $\bar{Y}_T$ converges to $\mu$ in mean square, which implies convergence in probability. Write \begin{eqnarray*} E(\bar{Y}_T- \mu)^2&=&E\left[(1/T)\sum_{t=1}^T(Y_t- \mu)\right]^2\\ &=&1/T^2E[\{(Y_1- \mu)+(Y_2- \mu)+\ldots+(Y_T- \mu)\}\\ &&\quad\{(Y_1- \mu)+(Y_2- \mu)+\ldots+(Y_T- \mu)\}]\\ &=&1/T^2\{[\gamma_0+\gamma_1+\ldots+\gamma_{T-1}]+[\gamma_1+\gamma_0+\gamma_1+\ldots+\gamma_{T-2}]\\ &&\quad+\ldots+[\gamma_{T-1}+\gamma_{T-2}+\ldots+\gamma_1+\gamma_0]\} \end{eqnarray*} Thus, \begin{eqnarray*} E(\bar{Y}_T- \mu)^2&=& 1/T^2\{T\gamma_0+2(T-1)\gamma_1+2(T-2)\gamma_2+\ldots+2\gamma_{T-1}\} \end{eqnarray*} Put differently, \begin{eqnarray*} E(\bar{Y}_T- \mu)^2&=& 1/T\{\gamma_0+2(T-1)\gamma_1/T+2(T-2)\gamma_2/T+\ldots+2\gamma_{T-1}/T\} \end{eqnarray*} This expression tends to zero as $T\to\infty$, as $TE(\bar{Y}_T- \mu)^2$ remains bounded, because \begin{eqnarray*} TE(\bar{Y}_T- \mu)^2&=& |\gamma_0+2(T-1)\gamma_1/T+2(T-2)\gamma_2/T+\ldots+2\gamma_{T-1}/T|\\ &\leqslant&|\gamma_0|+2(T-1)|\gamma_1|/T+2(T-2)|\gamma_2|/T+\ldots+2|\gamma_{T-1}|/T\\ &\leqslant&|\gamma_0|+2|\gamma_1|+2|\gamma_2|+\ldots+2|\gamma_{T-1}|\\ &\to&c<\infty, \end{eqnarray*} using summability of the autocovariances. That is, if the autocovariances decay sufficiently quickly, mean ergodicity follows. We next show that any causal $ARMA(p,q)$ process is ergodic, as it has the required summable autocovariances. Let us look at the $MA(\infty)$ representation and use the triangle inequality to bound the sufficient condition for mean ergodicity of a stationary/causal process from above. Stationarity implies that a causal, or $MA(\infty)$ with summable coefficients, representation of the process exists. The claim is therefore shown if we can show that summability of the $MA(\infty)$ coefficients $\sum_{j=0}^\infty|\psi_j|<\infty$ implies $\sum_{k=0}^\infty|\gamma_k|<\infty$ where $\gamma_k=\sigma^2\sum_{j=0}^{\infty}\psi_j\psi_{j+k}$ is the $k$th autocovariance of an $MA(\infty)$-process. We write \begin{eqnarray*} \sum_{k=0}^\infty|\gamma_k|&=&\sum_{k=0}^\infty\left|\sigma^2\sum_{j=0}^{\infty}\psi_j\psi_{j+k}\right|\\ &=&\sigma^2\sum_{k=0}^\infty\left|\sum_{j=0}^{\infty}\psi_j\psi_{j+k}\right|\\ &\leqslant&\sigma^2\sum_{k=0}^\infty\sum_{j=0}^{\infty}\left|\psi_j\psi_{j+k}\right|\\ &=&\sigma^2\sum_{k=0}^\infty\sum_{j=0}^{\infty}\left|\psi_j\right|\left|\psi_{j+k}\right|\\ &=&\sigma^2\sum_{j=0}^{\infty}\left|\psi_j\right|\sum_{k=0}^\infty\left|\psi_{j+k}\right|\\ &\leqslant&\sigma^2\sum_{j=0}^{\infty}\left|\psi_j\right|\sum_{k=0}^\infty\left|\psi_{k}\right|\\ &<&\infty \end{eqnarray*} Here, the first inequality uses the triangle inequality. Summability of the coefficients permits interchanging the order of summation in fourth equality (and hence taking out $|\psi_j|$ which does not depend on $k$). 
The second inequality follows because the second summation additionally has the terms $\psi_0,\ldots,\psi_{j-1}$ for $j>0$. The last inequality then follows from summability of the coefficients.
Why is ergodicity not a requirement for ARIMA models besides stationarity? A bit technical maybe, but stationary ARMA processes are by construction mean-ergodic (as the other answer correctly pointed out, a previous version of my answer did not spell that out clearly and wro
32,220
Why is ergodicity not a requirement for ARIMA models besides stationarity?
Ergodicity and mean-ergodicity are not the same properties. Ergodicity is a much stronger property than mean-ergodicity (mean-ergodicity just means an $L^2$-LLN holds). There are easy examples of ARMA processes which are not ergodic. What was shown by the previous answer is that an ARMA process is mean-ergodic. (This is simply because $l^1$, the space of absolutely summable sequences, is closed under convolution, and this makes the autocovariances also $l^1$, which implies mean-ergodicity.) Why is ergodicity not a requirement for ARIMA modeling? There is no reason for it to be. These notions have different historical origins. Ergodicity was first introduced in statistical mechanics, and was intended to capture the phenomenon that "time average equals ensemble average". On the other hand, ARIMA models were introduced by Box and Jenkins for time series modeling. You can already see from the definitions that they occur in different settings. Ergodicity is a property defined for strictly stationary processes, whereas ARMA processes are considered under covariance-stationarity. From a time series perspective, first, the strict stationarity under which ergodicity is considered is too stringent an assumption to impose on general data. Second, the weak LLN that holds for many covariance-stationary processes (e.g. under an $l^1$-condition on the autocovariances) is empirically just as good as the strong ergodic LLN. For a good while, these two literatures developed separately and did not talk to each other. Later there were attempts to link the two notions by characterizing when ARMA processes satisfy strong-mixing types of conditions, which are a strengthening of ergodicity for more general processes (by, e.g., Kolmogorov and co-authors). But the connection is still incomplete. ...is there an example of a ergodic, but non-stationary process? As stated above, ergodic processes are by definition strictly stationary.
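To see the distinction concretely, here is a classic textbook-style illustration in R (my addition, and deliberately not an ARMA example): $Y_t = Z + \varepsilon_t$, with the level $Z$ drawn once per realisation, is strictly stationary but not ergodic, because the time average of any single path converges to that path's $Z$ rather than to the ensemble mean.

set.seed(7)
simulate_path <- function(n) {
  Z <- rnorm(1, mean = 0, sd = 3)   # random level, drawn once per realisation
  Z + rnorm(n)                      # Y_t = Z + eps_t: strictly stationary
}

# time averages of independent realisations disagree: each converges to its own Z,
# not to the ensemble mean 0, so the process is not (mean-)ergodic
replicate(5, mean(simulate_path(1e5)))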
Why is ergodicity not a requirement for ARIMA models besides stationarity?
Ergodicity and mean-ergodicity are not the same properties. Ergodicity is a much stronger property than mean-ergodicity (mean-ergodicity just means an $L^2$-LLN holds). There are easy examples of ARMA
Why is ergodicity not a requirement for ARIMA models besides stationarity? Ergodicity and mean-ergodicity are not the same properties. Ergodicity is a much stronger property than mean-ergodicity (mean-ergodicity just means an $L^2$-LLN holds). There are easy examples of ARMA processes which are not ergodic. What was shown by previous answer is that an ARMA process is mean-ergodic. (This is simply because that $l^1$, the space of absolutely summable sequences, is closed under convolution, and this makes the autocovariances also $l^1$, which implies mean-ergodicity.) Why is ergodicity not a requirement for ARIMA modeling? There is no reason for it to be. These notions have different historical origins. Ergodicity was first introduced in statistical mechanics, and intended to capture the phenomenon that "time average equals ensemble average". On the other hand, ARIMA models were introduced by Box and Jenkins for time series modeling. You can already see from the definitions that they occur in different settings. Ergodicity is a property defined for strictly stationary processes, whereas ARMA processes are considered under covariance-stationarity. From a time series perspective, first, the strict stationarity under which ergodicity is considered is too stringent an assumption to impose on general data. Second, the weak LLN that holds for many covariance-stationary processes (e.g. under $l^1$-condition for the autocovariances) is empirically just as good as the strong ergodic LLN. For a good while, these two literatures developed separately and did not talk to each other. Later there were attempts to link the two notions by characterizing when ARMA processes satisfy strong-mixing type of conditions, which is a strengthening of ergodicity for more general processes (by, e.g. Kolmogorov and co-authors). But the connection is still incomplete. ...is there an example of a ergodic, but non-stationary process? As stated above, ergodic processes are by definition strictly stationary.
Why is ergodicity not a requirement for ARIMA models besides stationarity? Ergodicity and mean-ergodicity are not the same properties. Ergodicity is a much stronger property than mean-ergodicity (mean-ergodicity just means an $L^2$-LLN holds). There are easy examples of ARMA
32,221
Linear regression and assumptions about response variable
The Wikipedia statement "This is appropriate when the response variable has a normal distribution." is wrong. OLS does NOT make assumptions about the marginal distribution of the response variable; it makes assumptions about the residuals (see the Gauss–Markov theorem). Also see this post for details: Why linear regression has assumption on residual but generalized linear model has assumptions on response? I am stealing @Cliff AB's example here. The following distribution of $y$ and of the residuals does not violate the OLS assumptions! Related posts: What is a complete list of the usual assumptions for linear regression? How does linear regression use the normal distribution? What if residuals are normally distributed, but y is not?
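A tiny R simulation in the spirit of that example (my own, with made-up numbers): the marginal distribution of $y$ is clearly bimodal, yet the residuals are normal and nothing in the OLS assumptions is violated.

set.seed(3)
n <- 1000
x <- rbinom(n, 1, 0.5)           # binary predictor
y <- 1 + 10 * x + rnorm(n)       # marginal distribution of y is bimodal

fit <- lm(y ~ x)

par(mfrow = c(1, 2))
hist(y, breaks = 40, main = "y: bimodal marginal")         # not normal, and that is fine
hist(resid(fit), breaks = 40, main = "residuals: normal")  # this is what matters for OLS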
Linear regression and assumptions about response variable
The Wikipedia statement This is appropriate when the response variable has a normal distribution. is wrong. OLS does NOT have assumptions on response variable. But has assumptions on residual (S
Linear regression and assumptions about response variable The Wikipedia statement This is appropriate when the response variable has a normal distribution. is wrong. OLS does NOT have assumptions on response variable. But has assumptions on residual (See Gauss–Markov theorem). Also see this post for details. Why linear regression has assumption on residual but generalized linear model has assumptions on response? I am stealing @Cliff AB 's example here. The following distribution on $y$ and residual does not violate OLS assumption! Related posts: What is a complete list of the usual assumptions for linear regression? How does linear regression use the normal distribution? What if residuals are normally distributed, but y is not?
Linear regression and assumptions about response variable The Wikipedia statement This is appropriate when the response variable has a normal distribution. is wrong. OLS does NOT have assumptions on response variable. But has assumptions on residual (S
32,222
Intuition using linear algebra that the rank of the projection matrix equals the rank of the design matrix
Let the number of observations be $n$, let $p$ count the parameters, and let $r$ designate the rank of the $n\times p$ design matrix $X$ (which, by definition, is the dimension of the image of $X$). The SVD A Singular Value Decomposition expresses $X$ as a product $$X = U\Sigma V^\prime$$ where the matrices $U$ (dimensions $n\times r$) and $V$ (dimensions $p \times r$) are orthogonal and $\Sigma$ is an $r\times r$ diagonal matrix with no zeros. A nonzero $X$ always has an SVD. (Here's one proof: the columns of $V$ must be the eigenvectors of $X^\prime X$ corresponding to nonzero eigenvalues while the columns of $U$ must be the eigenvectors of $XX^\prime$ corresponding to nonzero eigenvalues. Those eigenvectors and eigenvalues exist because both $X^\prime X$ and $XX^\prime$ are nonzero real symmetric matrices: this is part of the Spectral Theorem. Although in the SVD it is arranged that all elements of $\Sigma$ be nonnegative, we won't need that here.) Interpreting the SVD One way to view the SVD is that it expresses the columns of $X$ as linear combinations of the columns of $U$: the coefficients are the columns of $\Sigma V^\prime$. You may therefore think of $U$ as being an orthonormal frame for the image of $X$, which is an $r$-dimensional subspace $\mathbb W\subset \mathbb{R}^n$. ("Orthonormal" means "orthogonal" and of unit length; "orthogonal" means mutually perpendicular, which is a crucial simplification.) Indeed, it is appealing to consider this geometrically: upon choosing bases for all the vector spaces in question, for $\beta\in\mathbb{R}^p$, $X$ determines a linear transformation from $\mathbb{R}^p$ into $\mathbb{R}^n$ in three steps: $\beta \to V^\prime \beta$ is a vector in $\mathbb{R}^r$. $\Sigma$ rescales each of the $r$ basis vectors of $\mathbb{R}^r$. The resulting $r$ coefficients determine a linear combination of the columns of $U$: that is, a unique vector in $\mathbb W$. (Equivalently, the original $r$ coefficients $V^\prime \beta$ specify linear combinations of the orthogonal columns of $U\Sigma$.) The image of $\beta$ in step (1) consists of all vectors spanned by the $r$ rows of $V$, and therefore has dimension $r$. Because the diagonal elements of $\Sigma$ are nonzero, the rescaling in (2) does not change that dimension. Thus the dimension of the space generated in (3) is also $r$. Consequently, the rank of $X$ is $r$. In statistical language, $V$ finds identifiable linear combinations of the parameters $\beta$ and the diagonal elements of $\Sigma$ establish scale factors in the space $\mathbb W$ spanned by the columns of $X$, which is the space of all possible vectors $y$ that can be exactly represented as linear combinations of those columns. More About Projections Here's a related algebraic argument. Any orthonormal frame $U$ determines a projection matrix $UU^\prime$. Specifically, left multiplying any vector $y\in\mathbb{R}^n$ by $U^\prime$ computes the coefficients of $y$ for each of the columns of $U$. Obviously this has rank $r$: since the columns of $U$ each get projected to themselves, the image of the linear transformation $UU^\prime$ is precisely $\mathbb W$. You probably know of a different looking formula for the "projection matrix": namely, $P=X(X^\prime X)^{-} X^\prime$ where $(X^\prime X)^{-}$ is a generalized inverse of $X^\prime X$. 
Using the SVD we may simplify this: $$P = (U\Sigma V^\prime)((U\Sigma V^\prime)^\prime\, (U\Sigma V^\prime))^{-} (U\Sigma V^\prime)^\prime = UU^\prime.$$ This is because terms of the form $V^\prime V=I_r=U^\prime U$ are identity matrices, which disappear in the multiplications, and the generalized inverse of $\Sigma^2$ is just $\Sigma ^{-2}$. It is now obvious that $P$ has rank $r$.
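A short numerical check of these claims in R (my own illustration on a deliberately rank-deficient design): the thin SVD of $X$, the projector $UU^\prime$, and the generalized-inverse formula $X(X^\prime X)^{-}X^\prime$ agree, and the projector has rank $r$.

library(MASS)   # for the Moore-Penrose generalized inverse ginv()

set.seed(4)
n <- 20; p <- 5
X <- matrix(rnorm(n * 3), n, 3) %*% matrix(rnorm(3 * p), 3, p)   # rank r = 3 by construction

s <- svd(X)
r <- sum(s$d > 1e-8)                       # numerical rank
U <- s$u[, 1:r, drop = FALSE]              # orthonormal frame for the column space of X

P1 <- U %*% t(U)                           # projector built from the SVD
P2 <- X %*% ginv(t(X) %*% X) %*% t(X)      # X (X'X)^- X'

max(abs(P1 - P2))                                                    # ~ 0: same projector
sum(eigen(P1, symmetric = TRUE, only.values = TRUE)$values > 1e-8)   # = r = 3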
Intuition using linear algebra that the rank of the projection matrix equals the rank of the design
Let the number of observations be $n$, let $p$ count the parameters, and let $r$ designate the rank of the $n\times p$ design matrix $X$ (which, by definition, is the dimension of the image of $X$). T
Intuition using linear algebra that the rank of the projection matrix equals the rank of the design matrix Let the number of observations be $n$, let $p$ count the parameters, and let $r$ designate the rank of the $n\times p$ design matrix $X$ (which, by definition, is the dimension of the image of $X$). The SVD A Singular Value Decomposition expresses $X$ as a product $$X = U\Sigma V^\prime$$ where the matrices $U$ (dimensions $n\times r$) and $V$ (dimensions $p \times r$) are orthogonal and $\Sigma$ is an $r\times r$ diagonal matrix with no zeros. A nonzero $X$ always has an SVD. (Here's one proof: the columns of $V$ must be the eigenvectors of $X^\prime X$ corresponding to nonzero eigenvalues while the columns of $U$ must be the eigenvectors of $XX^\prime$ corresponding to nonzero eigenvalues. Those eigenvectors and eigenvalues exist because both $X^\prime X$ and $XX^\prime$ are nonzero real symmetric matrices: this is part of the Spectral Theorem. Although in the SVD it is arranged that all elements of $\Sigma$ be nonnegative, we won't need that here.) Interpreting the SVD One way to view the SVD is that it expresses the columns of $X$ as linear combinations of the columns of $U$: the coefficients are the columns of $\Sigma V^\prime$. You may therefore think of $U$ as being an orthonormal frame for the image of $X$, which is an $r$-dimensional subspace $\mathbb W\subset \mathbb{R}^n$. ("Orthonormal" means "orthogonal" and of unit length; "orthogonal" means mutually perpendicular, which is a crucial simplification.) Indeed, it is appealing to consider this geometrically: upon choosing bases for all the vector spaces in question, for $\beta\in\mathbb{R}^p$, $X$ determines a linear transformation from $\mathbb{R}^p$ into $\mathbb{R}^n$ in three steps: $\beta \to V^\prime \beta$ is a vector in $\mathbb{R}^r$. $\Sigma$ rescales each of the $r$ basis vectors of $\mathbb{R}^r$. The resulting $r$ coefficients determine a linear combination of the columns of $U$: that is, a unique vector in $\mathbb W$. (Equivalently, the original $r$ coefficients $V^\prime \beta$ specify linear combinations of the orthogonal columns of $U\Sigma$.) The image of $\beta$ in step (1) consists of all vectors spanned by the $r$ rows of $V$, and therefore has dimension $r$. Because the diagonal elements of $\Sigma$ are nonzero, the rescaling in (2) does not change that dimension. Thus the dimension of the space generated in (3) is also $r$. Consequently, the rank of $X$ is $r$. In statistical language, $V$ finds identifiable linear combinations of the parameters $\beta$ and the diagonal elements of $\Sigma$ establish scale factors in the space $\mathbb W$ spanned by the columns of $X$, which is the space of all possible vectors $y$ that can be exactly represented as linear combinations of those columns. More About Projections Here's a related algebraic argument. Any orthonormal frame $U$ determines a projection matrix $UU^\prime$. Specifically, left multiplying any vector $y\in\mathbb{R}^n$ by $U^\prime$ computes the coefficients of $y$ for each of the columns of $U$. Obviously this has rank $r$: since the columns of $U$ each get projected to themselves, the image of the linear transformation $UU^\prime$ is precisely $\mathbb W$. You probably know of a different looking formula for the "projection matrix": namely, $P=X(X^\prime X)^{-} X^\prime$ where $(X^\prime X)^{-}$ is a generalized inverse of $X^\prime X$. 
Using the SVD we may simplify this: $$P = (U\Sigma V^\prime)((U\Sigma V^\prime)^\prime\, (U\Sigma V^\prime))^{-} (U\Sigma V^\prime)^\prime = UU^\prime.$$ This is because terms of the form $V^\prime V=I_r=U^\prime U$ are identity matrices, which disappear in the multiplications, and the generalized inverse of $\Sigma^2$ is just $\Sigma ^{-2}$. It is now obvious that $P$ has rank $r$.
Intuition using linear algebra that the rank of the projection matrix equals the rank of the design Let the number of observations be $n$, let $p$ count the parameters, and let $r$ designate the rank of the $n\times p$ design matrix $X$ (which, by definition, is the dimension of the image of $X$). T
32,223
Stochastic gradient descent for regularized logistic regression
First, I would recommend checking my answer in this post: How could stochastic gradient descent save time compared to standard gradient descent? Andrew Ng.'s formula is correct. We should not use $\frac \lambda {2n}$ on the regularization term. Here is the reason: As I discussed in my answer, the idea of SGD is to use a subset of the data to approximate the gradient of the objective function to optimize. Here the objective function has two terms, the cost value and the regularization. The cost value has the sum, but the regularization term does not. This is why the regularization term does not need to be divided by $n$ in SGD. EDIT: After reviewing another answer, I may need to revise what I said. Now I think both answers are right: we can use $\frac \lambda {2n}$ or $\frac \lambda {2}$, and each has pros and cons. It depends on how we define our objective function. Let me use regression (squared loss) as an example. If we define the objective function as $\frac {\|Ax-b\|^2+\lambda\|x\|^2} N$, then we should divide the regularization by $N$ in SGD. If we define the objective function as $\frac {\|Ax-b\|^2} N+\lambda\|x\|^2$ (as shown in the code demo), then we should NOT divide the regularization by $N$ in SGD. Here is a code demo; we use all the data in "SGD", so it should give the exact gradient:

# ------------------------------------------------------
# data, and loss function, and gradient
# ------------------------------------------------------
set.seed(0)
par(mfrow=c(2,1))
n_data=1e3
n_feature=2
A=matrix(runif(n_data*n_feature),ncol=n_feature)
b=runif(n_data)

sq_loss<-function(A,b,x,lambda){
  e=A %*% x -b
  v=crossprod(e)
  return(v[1]/(2*n_data)+lambda*crossprod(x))
}

sq_loss_gr<-function(A,b,x,lambda){
  e=A %*% x -b
  v=t(A) %*% e
  return(v/n_data+2*lambda*x)
}

# ------------------------------------------------------
# sgd: approximate gradient using subset of data
# ------------------------------------------------------
sq_loss_gr_approx_1<-function(A,b,x,nsample,lambda){
  # sample data and calculate gradient
  i=sample(n_data,nsample)
  gr=t(A[i,] %*% x-b[i]) %*% A[i,]
  v=matrix(gr/nsample,ncol=1)
  return(v+2*lambda*x)            # regularization gradient NOT divided by nsample
}

sq_loss_gr_approx_2<-function(A,b,x,nsample,lambda){
  # sample data and calculate gradient
  i=sample(n_data,nsample)
  gr=t(A[i,] %*% x-b[i]) %*% A[i,]
  v=matrix(gr/nsample,ncol=1)
  return(v+2*lambda*x/nsample)    # regularization gradient divided by nsample
}

x=matrix(runif(2),ncol=1)
sq_loss_gr(A,b,x,lambda=3)
sq_loss_gr_approx_1(A,b,x,nsample=n_data,lambda=3)
sq_loss_gr_approx_2(A,b,x,nsample=n_data,lambda=3)

The function sq_loss_gr_approx_1 is right, because the loss function is v[1]/(2*n_data)+lambda*crossprod(x) and not (v[1]+lambda*crossprod(x))/(2*n_data)

> sq_loss_gr(A,b,x,lambda=3)
#          [,1]
# [1,] 3.317703
# [2,] 4.969016
> sq_loss_gr_approx_1(A,b,x,nsample=n_data,lambda=3)
#          [,1]
# [1,] 3.317703
# [2,] 4.969016
> sq_loss_gr_approx_2(A,b,x,nsample=n_data,lambda=3)
#           [,1]
# [1,] 0.1325575
# [2,] 0.1597326
Stochastic gradient descent for regularized logistic regression
First I would recommend you to check my answer in this post first. How could stochastic gradient descent save time compared to standard gradient descent? Andrew Ng.'s formula is correct. We should not
Stochastic gradient descent for regularized logistic regression First I would recommend you to check my answer in this post first. How could stochastic gradient descent save time compared to standard gradient descent? Andrew Ng.'s formula is correct. We should not use $\frac \lambda {2n}$ on regularization term. Here is the reason: As I discussed in my answer, the idea of SGD is use a subset of data to approximate the gradient of objective function to optimize. Here objective function has two terms, cost value and regularization. Cost value has the sum, but regularization term does not. This is why regularization term does not need to divide by $n$ by SGD. EDIT: After review another answer. I may need to revise what I said. Now I think both answers are right: we can use $\frac \lambda {2n}$ or $\frac \lambda {2}$, each has pros and cons. But it depends on how do we define our objective function. Let me use regression (squared loss) as an example. If we define objective function as $\frac {\|Ax-b\|^2+\lambda\|x\|^2} N$ then, we should divide regularization by $N$ in SGD. If we define objective function as $\frac {\|Ax-b\|^2} N+\lambda\|x\|^2$ (as shown in the code demo). Then, we should NOT divide regularization by $N$ in SGD. Here is some code demo, we are using all data in SGD, so it should be the exact gradient.: # ------------------------------------------------------ # data, and loss function, and gradient # ------------------------------------------------------ set.seed(0) par(mfrow=c(2,1)) n_data=1e3 n_feature=2 A=matrix(runif(n_data*n_feature),ncol=n_feature) b=runif(n_data) sq_loss<-function(A,b,x,lambda){ e=A %*% x -b v=crossprod(e) return(v[1]/(2*n_data)+lambda*crossprod(x)) } sq_loss_gr<-function(A,b,x,lambda){ e=A %*% x -b v=t(A) %*% e return(v/n_data+2*lambda*x) } # ------------------------------------------------------ # sgd: approximate gradient using subset of data # ------------------------------------------------------ sq_loss_gr_approx_1<-function(A,b,x,nsample,lambda){ # sample data and calculate gradient i=sample(n_data,nsample) gr=t(A[i,] %*% x-b[i]) %*% A[i,] v=matrix(gr/nsample,ncol=1) return(v+2*lambda*x) } sq_loss_gr_approx_2<-function(A,b,x,nsample,lambda){ # sample data and calculate gradient i=sample(n_data,nsample) gr=t(A[i,] %*% x-b[i]) %*% A[i,] v=matrix(gr/nsample,ncol=1) return(v+2*lambda*x/nsample) } x=matrix(runif(2),ncol=1) sq_loss_gr(A,b,x,lambda=3) sq_loss_gr_approx_1(A,b,x,nsample=n_data,lambda=3) sq_loss_gr_approx_2(A,b,x,nsample=n_data,lambda=3) The function sq_loss_gr_approx_1 is right. Because loss function is v[1]/(2*n_data)+lambda*crossprod(x) but not (v[1]+lambda*crossprod(x))/(2*n_data) > sq_loss_gr(A,b,x,lambda=3) # [,1] # [1,] 3.317703 # [2,] 4.969016 > sq_loss_gr_approx_1(A,b,x,nsample=n_data,lambda=3) # [,1] # [1,] 3.317703 # [2,] 4.969016 > sq_loss_gr_approx_2(A,b,x,nsample=n_data,lambda=3) # [,1] # [1,] 0.1325575 # [2,] 0.1597326
Stochastic gradient descent for regularized logistic regression First I would recommend you to check my answer in this post first. How could stochastic gradient descent save time compared to standard gradient descent? Andrew Ng.'s formula is correct. We should not
32,224
Stochastic gradient descent for regularized logistic regression
It looks like you are asking about how regularization might be applied in the case of stochastic gradient updates, i.e. updating for one training example at a time. Your idea to divide the regularization term by the number of data points $N$ (you use $n$) is correct. I also checked this paper, and it appears to say the same. The loss or cost is defined by Eq. 2, and in section 3 (see the first equation in sec. 3) they show an update of the weights $w$ for a single training example; they have clearly divided the regularization term by $N$. Thus the loss for a single example is also divided by $N$.
Stochastic gradient descent for regularized logistic regression
It looks like you are asking about how regularization might be applied in the case of stochastic gradient updates i.e. updating for one training example at a time. Your idea to divide the regularizati
Stochastic gradient descent for regularized logistic regression It looks like you are asking about how regularization might be applied in the case of stochastic gradient updates i.e. updating for one training example at a time. Your idea to divide the regularization term by number of data points $N$ (you use $n$) is correct. I also checked this paper, and it appears to say the same. The loss or cost is defined by $Eq. 2$ and in section 3 (see the first equation in sec. 3) they show an update of the weights $w$ for a single training example, they have clearly divided the regularization term by $N$. Thus the loss for a single example is also divided by $N$.
Stochastic gradient descent for regularized logistic regression It looks like you are asking about how regularization might be applied in the case of stochastic gradient updates i.e. updating for one training example at a time. Your idea to divide the regularizati
32,225
Stochastic gradient descent for regularized logistic regression
I always viewed the regularizer separately from the loss. Most machine learning problems come in the form of "Regularizer + Empirical Risk", where Empirical Risk means the arithmetic mean of the sum of the loss of every training sample. What you mean by "spread out across all observations" probably is that when you take the stochastic gradient of a single sample with respect to the weights, then you have to also consider the regularizer which does not get "spread out"/averaged. Compare gradient of regularized GD: $\nabla_w~\lambda~Regularizer(w) + \nabla_w n^{-1}\sum_{i=1}^{n}loss_i (w) $ to regularized SGD (only one element of the sum is considered): $\nabla_w~\lambda~Regularizer(w) + \nabla_w loss_i (w) $ Short: In my opinion it makes sense to separate the terms "regularization" and "cost" (which I named "Empirical Risk" for the full data and "loss" for one sample)
Stochastic gradient descent for regularized logistic regression
I always viewed the regularizer separately from the loss. Most machine learning problems come in the form of "Regularizer + Empirical Risk", where Empirical Risk means the arithmetic mean of the sum o
Stochastic gradient descent for regularized logistic regression I always viewed the regularizer separately from the loss. Most machine learning problems come in the form of "Regularizer + Empirical Risk", where Empirical Risk means the arithmetic mean of the sum of the loss of every training sample. What you mean by "spread out across all observations" probably is that when you take the stochastic gradient of a single sample with respect to the weights, then you have to also consider the regularizer which does not get "spread out"/averaged. Compare gradient of regularized GD: $\nabla_w~\lambda~Regularizer(w) + \nabla_w n^{-1}\sum_{i=1}^{n}loss_i (w) $ to regularized SGD (only one element of the sum is considered): $\nabla_w~\lambda~Regularizer(w) + \nabla_w loss_i (w) $ Short: In my opinion it makes sense to separate the terms "regularization" and "cost" (which I named "Empirical Risk" for the full data and "loss" for one sample)
Stochastic gradient descent for regularized logistic regression I always viewed the regularizer separately from the loss. Most machine learning problems come in the form of "Regularizer + Empirical Risk", where Empirical Risk means the arithmetic mean of the sum o
32,226
SVD of a data matrix after an orthogonal projection to a subspace
In the SVD $X = USV^\prime$, where $X$ is an $n\times p$ matrix, $V$ is an orthogonal $p\times p$ matrix. Suppose $B$ is an orthogonal $p\times q$ matrix: that is, $B^\prime B = 1_q$. Let $$S V^\prime B = TDW^\prime\tag{1}$$ be an SVD of $S V^\prime B$. Thus, by definition, $T$ is a $p\times q$ matrix, $D$ is a diagonal matrix of dimension $q$, and $W$ is an orthogonal $q\times q$ matrix. Compute $$XB = (USV^\prime) B = U(SV^\prime B) = U(TDW^\prime) = (UT)D(W^\prime).\tag{2}$$ Because $(UT)^\prime (UT) = T^\prime (U^\prime U) T = T^\prime T = 1_q$, $UT$ has orthonormal columns. Because $D$ and $W^\prime$ are part of an SVD, then by definition $D$ is diagonal with non-negative entries and $W$ is a $q\times q$ orthogonal matrix. Consequently, equation $(2)$ gives an SVD of $XB$. Equation $(1)$ shows how this SVD is related to that of $X$ and $B$.
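Here is a quick numerical check of this construction in R (my own addition; the dimensions are arbitrary): build a $B$ with orthonormal columns, take the SVD of $SV^\prime B$, and confirm that $(UT)DW^\prime$ reproduces $XB$ and that $UT$ has orthonormal columns.

set.seed(5)
n <- 8; p <- 5; q <- 3
X <- matrix(rnorm(n * p), n, p)
B <- qr.Q(qr(matrix(rnorm(p * q), p, q)))   # p x q with orthonormal columns

s <- svd(X)                    # thin SVD: X = U S V', with U'U = identity
U <- s$u; V <- s$v; S <- diag(s$d)

s2 <- svd(S %*% t(V) %*% B)    # SVD of S V'B = T D W'
T_ <- s2$u; D <- diag(s2$d); W <- s2$v

max(abs((U %*% T_) %*% D %*% t(W) - X %*% B))    # ~ 0: this is an SVD of XB
max(abs(crossprod(U %*% T_) - diag(q)))          # ~ 0: UT has orthonormal columns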
SVD of a data matrix after an orthogonal projection to a subspace
In the SVD $X = USV^\prime$, where $X$ is an $n\times p$ matrix, $V$ is an orthogonal $p\times p$ matrix. Suppose $B$ is an orthogonal $p\times q$ matrix: that is, $B^\prime B = 1_q$. Let $$S V^\prim
SVD of a data matrix after an orthogonal projection to a subspace In the SVD $X = USV^\prime$, where $X$ is an $n\times p$ matrix, $V$ is an orthogonal $p\times p$ matrix. Suppose $B$ is an orthogonal $p\times q$ matrix: that is, $B^\prime B = 1_q$. Let $$S V^\prime B = TDW^\prime\tag{1}$$ be an SVD of $S V^\prime B$. Thus, by definition, $T$ is a $p\times q$ matrix, $D$ is a diagonal matrix of dimension $q$, and $W$ is an orthogonal $q\times q$ matrix. Compute $$XB = (USV^\prime) B = U(SV^\prime B) = U(TDW^\prime) = (UT)D(W^\prime).\tag{2}$$ Because $(UT)^\prime (UT) = T^\prime (U^\prime U) T = T^\prime T = 1_q$, $UT$ has orthonormal columns. Because $D$ and $W^\prime$ are part of an SVD, then by definition $D$ is diagonal with non-negative entries and $W$ is a $q\times q$ orthogonal matrix. Consequently, equation $(2)$ gives an SVD of $XB$. Equation $(1)$ shows how this SVD is related to that of $X$ and $B$.
SVD of a data matrix after an orthogonal projection to a subspace In the SVD $X = USV^\prime$, where $X$ is an $n\times p$ matrix, $V$ is an orthogonal $p\times p$ matrix. Suppose $B$ is an orthogonal $p\times q$ matrix: that is, $B^\prime B = 1_q$. Let $$S V^\prim
32,227
SVD of a data matrix after an orthogonal projection to a subspace
For a matrix $B$ with orthonormal columns (but not square), I would like a way of finding an SVD of $XB$ in terms of the SVD of $X = USV^T$. As suggested by @whuber, a first step towards finding the SVD of $XB$ is to add columns to $B$ to make it square (and thus orthogonal). Call this matrix $\tilde B = [B; B_{\perp}]$, and let $k$ be the number of columns of $B_{\perp}$. Then because $\tilde B$ is orthogonal, if $X = USV^T$ is an SVD of $X$, then $X\tilde B = US(\tilde B^TV)^T$ is an SVD of $X \tilde B$. Because $XB$ can be gotten from $X\tilde B$ by dropping the last $k$ columns, my original problem now reduces to the following: Given the SVD of a matrix $Y = DEF^T$, is there a way of finding the SVD of $Y' = D'E'F'^T$, where $Y'$ is the matrix resulting from dropping the last $k$ columns of $Y$? (Here I have $Y = X\tilde B$ and $Y' = XB$.) This problem is referred to as "downdating the SVD", and in general, there seem to be many approaches for doing this. One relevant approach is found here, and more discussion here. But in general, given that algorithms for downdating the SVD appear to be an area of active research, I'm concluding that there isn't a simple way of finding the SVD of $XB$ given only the SVD of $X$.
SVD of a data matrix after an orthogonal projection to a subspace
For a matrix $B$ with orthonormal columns (but not square), I would like a way of finding an SVD of $XB$ in terms of the SVD of $X = USV^T$. As suggested by @whuber, a first step towards finding the S
SVD of a data matrix after an orthogonal projection to a subspace For a matrix $B$ with orthonormal columns (but not square), I would like a way of finding an SVD of $XB$ in terms of the SVD of $X = USV^T$. As suggested by @whuber, a first step towards finding the SVD of $XB$ is to add columns to $B$ to make it square (and thus orthogonal). Call this matrix $\tilde B = [B; B_{\perp}]$, and let $k$ be the number of columns of $B_{\perp}$. Then because $\tilde B$ is orthogonal, if $X = USV^T$ is an SVD of $X$, then $X\tilde B = US(\tilde B^TV)^T$ is an SVD of $X \tilde B$. Because $XB$ can be gotten from $X\tilde B$ by dropping the last $k$ columns, my original problem now reduces to the following: Given the SVD of a matrix $Y = DEF^T$, is there a way of finding the SVD of $Y' = D'E'F'^T$, where $Y'$ is the matrix resulting from dropping the last $k$ columns of $Y$? (Here I have $Y = X\tilde B$ and $Y' = XB$.) This problem is referred to as "downdating the SVD", and in general, there seem to be many approaches for doing this. One relevant approach is found here, and more discussion here. But in general, given that algorithms for downdating the SVD appear to be an area of active research, I'm concluding that there isn't a simple way of finding the SVD of $XB$ given only the SVD of $X$.
SVD of a data matrix after an orthogonal projection to a subspace For a matrix $B$ with orthonormal columns (but not square), I would like a way of finding an SVD of $XB$ in terms of the SVD of $X = USV^T$. As suggested by @whuber, a first step towards finding the S
32,228
VAR in levels for cointegrated data
It is not recent, but many textbooks, video series, etc. in econometrics still do not acknowledge this. You can have a look at the papers below. The classic reference would be the Sims, Stock and Watson paper. Definitely also look into Lütkepohl; he is an authority when it comes to SVARs. You are incorrect in stating that "there has to be cointegration" to use a VAR in levels. You can also estimate a VAR in levels of non-stationary variables when there is no cointegration present! However, the Phillips & Durlauf and Ashley & Verbrugge papers argue for SVARs in levels instead of VECMs if cointegration is present (under certain conditions).
Sims, C. A., Stock, J. H., & Watson, M. W. (1990). Inference in linear time series models with some unit roots. Econometrica: Journal of the Econometric Society, 113-144.
Ashley, R. A., & Verbrugge, R. J. (2009). To difference or not to difference: a Monte Carlo investigation of inference in vector autoregression models. International Journal of Data Analysis Techniques and Strategies, 1(3), 242-274.
Phillips, P. C., & Durlauf, S. N. (1986). Multiple time series regression with integrated processes. The Review of Economic Studies, 53(4), 473-495.
Lütkepohl, H. (2011). Vector autoregressive models. In International Encyclopedia of Statistical Science (pp. 1645-1647). Springer Berlin Heidelberg.
Christiano, L. J., Eichenbaum, M., & Evans, C. (1994). The effects of monetary policy shocks: some evidence from the flow of funds (No. w4699). National Bureau of Economic Research.
Doan, T. A. (1992). RATS: User's manual. Estima.
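As a small, hedged illustration of the point that a VAR in levels can be estimated directly: each equation of a VAR is just a regression in levels, so it can be fit equation by equation with OLS (the data-generating process, lag length and coefficients below are made up):

set.seed(3)
n <- 200
x <- cumsum(rnorm(n))                 # an I(1) series
y <- 0.5 * x + rnorm(n, sd = 0.5)     # cointegrated with x by construction: y - 0.5*x is stationary
d <- data.frame(y = y[-1], x = x[-1], y1 = y[-n], x1 = x[-n])   # levels and first lags
eq_y <- lm(y ~ y1 + x1, data = d)     # first equation of a VAR(1) in levels
eq_x <- lm(x ~ y1 + x1, data = d)     # second equation
coef(eq_y); coef(eq_x)

The point estimates are straightforward; the subtlety discussed in the papers above is which standard errors and test statistics remain valid when the series have unit roots.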
VAR in levels for cointegrated data
It is not recent but many textbooks, video series, etc in Econometrics still do not acknowledge this. You can have a look into the papers below. The classic reference would be the Sims, Stock and Wats
VAR in levels for cointegrated data It is not recent but many textbooks, video series, etc in Econometrics still do not acknowledge this. You can have a look into the papers below. The classic reference would be the Sims, Stock and Watson paper. Definetly also look into Lütkepohl, he is an authority when it comes to SVARS. You are incorrect in stating that "there has to be cointegration" to use VAR in levels. You can also estimate VAR in levels of non-stationary variables when there is no cointegration present! However, the Phillips, Durlauf and Ashley, Vergbugge papers argue for SVARs in levels instead of VECMs if cointegration is present (under certain conditions). Sims, C. A., Stock, J. H., & Watson, M. W. (1990). Inference in linear time series models with some unit roots. Econometrica: Journal of the Econometric Society, 113-144. Ashley, R. A., & Verbrugge, R. J. (2009). To difference or not to difference: a Monte Carlo investigation of inference in vector autoregression models. International Journal of Data Analysis Techniques and Strategies, 1(3), 242-274. Phillips, P. C., & Durlauf, S. N. (1986). Multiple time series regression with integrated processes. The Review of Economic Studies, 53(4), 473-495. Lütkepohl, H. (2011). Vector autoregressive models. In International Encyclopedia of Statistical Science (pp. 1645-1647). Springer Berlin Heidelberg. Christiano, L. J., Eichenbaum, M., & Evans, C. (1994). The effects of monetary policy shocks: some evidence from the flow of funds (No. w4699). National Bureau of Economic Research. Doan, T. A. (1992). RATS: User's manual. Estima.ote
VAR in levels for cointegrated data It is not recent but many textbooks, video series, etc in Econometrics still do not acknowledge this. You can have a look into the papers below. The classic reference would be the Sims, Stock and Wats
32,229
VAR in levels for cointegrated data
I want to expand on derFuchs' post. Further, I feel that too often when a unit root is present, people automatically just first-difference their data. It's not always necessary!
Prediction
We've always known that we can run a VAR in levels when series follow a unit root. For example, assume the two series $x$ and $y$ follow a unit root. If we regress $y$ on lagged $x$ (i.e. $y_t = \alpha + \beta x_{t-1} + \epsilon_t$) and they are not cointegrated, we'll obtain spurious results. However, if we include lags of $y$ then the results will no longer be spurious. This is because the lags of $y$ will guarantee that the residuals will be stationary. If we regress $y$ on $x$ and they are cointegrated, we're fine. After all, in the traditional two-step ECM method we estimate this regression in the first stage. We've only discussed AR models with distributed lags. However, VARs are just a system of AR models with distributed lags, so the above intuition still holds in the VAR context. The reason why this all works is that unit roots (other than in the spurious regression case) have little impact on the coefficient estimates. For example, if $z$ follows a unit root and we fit an AR(1), we'll get a coefficient of roughly 1, which is the best estimate of where a random walk will be next period (i.e. where it was last period). However, because $z$ follows a stochastic trend, it will not have a tendency to come back to its mean. Loosely speaking, this implies that the variance of our estimates will tend toward infinity as we have more data (i.e. no asymptotic variance). Broadly speaking, a unit root is a problem for estimating variances (i.e. standard errors) and less so for means (i.e. coefficients).
Inference
As discussed above, the nature of a random walk (i.e. a unit root process) implies that the variance is explosive. You can see this yourself: estimate prediction intervals after fitting an AR(1) to a unit root process. As a result of this fact, it is tricky to perform hypothesis testing. Let's again abuse our incorrect, but enlightening, statement from above. If a unit root process has a variance that tends toward infinity, then you will never be able to reject any null hypothesis. The big breakthrough of Sims, Stock, and Watson is that they showed that under some situations it is possible to perform inference when a process follows a unit root. Another good paper, which expands on Sims, Stock, and Watson, is Toda and Yamamoto (1995). They show that Granger causality testing is possible in the presence of a unit root. Finally, keep in mind that unit roots are still really tricky. They will impact your VAR in weird ways. For example, a unit root implies that the MA representation of your VAR does not exist, as the coefficient matrix is not invertible. Therefore an IRF will not be accurate (though some people still do it).
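A hedged illustration of the spurious-regression point above, with simulated data (two independent random walks, so any apparent relationship is spurious by construction):

set.seed(4)
n <- 500
x <- cumsum(rnorm(n))                        # independent random walks,
y <- cumsum(rnorm(n))                        # not cointegrated by construction
summary(lm(y ~ x))                           # "significant" slope and large R^2: spurious
d <- data.frame(y = y[-1], y1 = y[-n], x1 = x[-n])
fit <- lm(y ~ y1 + x1, data = d)             # add the lagged dependent variable
summary(fit)                                 # coefficient on y1 near 1, on x1 near 0
plot(residuals(fit), type = "l")             # residuals now look stationary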
VAR in levels for cointegrated data
I want to expand on derFuchs post. Further, I feel that too often when a unit root is present, people automatically just first difference their data. It's not always necessary! Prediction We've always
VAR in levels for cointegrated data I want to expand on derFuchs post. Further, I feel that too often when a unit root is present, people automatically just first difference their data. It's not always necessary! Prediction We've always known that we can run a VAR in levels when series follow a unit root. For example, assume the two series $x$ and $y$ follow a unit root. If we regress $x$ on $y$ (i.e. $y_t = \alpha + x_{t-1} + \epsilon$) and they are not cointegrated, we'll obtain spurious results. However, if we include lags of $y$ then the results will no longer be spurious. This is because the lags of $y$ will guarantee that the residuals will be stationary. If we regress $x$ on $y$ and they are cointegrated, we're fine. After all, in the traditional two-step ECM method we estimate this regression in the first stage. We've only discussed AR models with distributed lags. However, VARs are just a system of AR models with distributed lags, so the above intuition still holds in the VAR context. The reason why this all works is because unit roots (other than in the spurious regression case) have little impact the coefficients estimates. For example, if $z$ follows a unit root and we fit an AR(1), we'll get a coefficient of roughly 1; which is the best estimate of where a random walk will be next period (i.e. where it was last period). However, because $z$ follows a stochastic trend, it will not have a tendency to come back to its mean. Loosely speaking, this implies that the variance of our estimates will tend toward infinity as we have more data (i.e. no asymptotic variance). Broadly speaking, a unit root is a problem for estimating variance (i.e. standard errors) and less so for means (i.e. coefficients). Inference As discussed above, the nature of a random walk (i.e. a unit root process) implies that the variance is explosive. You can see this yourself. Estimate prediction intervals after fitting an AR(1) to a unit root process. As a result of this fact, it is tricky to perform hypothesis testing. Let's again abuse our incorrect, but enlighting, statement from above. If a unit root process has a variance that tends toward infinity, then you will never be able to reject any null hypothesis. The big breakthrough of Sims, Stock, and Watson is that they showed that under some situations it is possible to perform inference when a process follows a unit root. Another good paper, that expands on Sims, Stock, and Watson is Toda and Yamamoto (1995). They show that Granger Causality is possible in the presence of a unit root. Finally, keep in mind that unit roots are still really tricky. They will impact your VAR in weird ways. For example, a unit root implies that the MA representation of your VAR does not exist, as the coefficients matrix is not invertible. Therefore an IRF will not be accurate (though some people still do it).
VAR in levels for cointegrated data I want to expand on derFuchs post. Further, I feel that too often when a unit root is present, people automatically just first difference their data. It's not always necessary! Prediction We've always
32,230
Should I use Poisson distribution for non-integer, count-like data?
1) Is it correct to use a GLMM with Poisson distribution with such data? (I don't think so but glmer seems to work anyway) No, it is not correct. By "count data" we generally mean data that records numbers of cases, so it can only be non-negative and integer-valued. The same holds for the Poisson distribution, which is a distribution for non-negative integer-valued data. Under the Poisson distribution the probability of observing a non-integer value is zero, and R behaves accordingly:
dpois(c(1, 1.5, 2, 2.5, 3), 5)
## [1] 0.03368973 0.00000000 0.08422434 0.00000000 0.14037390
## Warning messages:
## 1: In dpois(c(1, 1.5, 2, 2.5, 3), 5) : non-integer x = 1.500000
## 2: In dpois(c(1, 1.5, 2, 2.5, 3), 5) : non-integer x = 2.500000
You can estimate a log-linear GLMM using these data, but assuming a Poisson distribution means that you treat all the non-integer values as having zero probability, so R throws the appropriate warnings. This means that the estimate of the log-likelihood, and the quantities based on it, like AIC, won't be what you want them to be. This doesn't mean that you cannot estimate a log-linear regression with non-integer data. You can, but you can't assume a Poisson distribution for such data. See also the What regression model is the most appropriate to use with count data? thread (check also the discussion in comments below the answer) and How does a Poisson distribution work when modeling continuous data and does it result in information loss? .
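One hedged option, if you still want a log-linear model for such non-integer, count-like responses, is a quasi-Poisson fit: it keeps the log link and the variance-proportional-to-mean assumption but does not rely on the Poisson pmf, so non-integer responses do not trigger the warnings above (a sketch with made-up data; whether it is appropriate depends on how the non-integer values arose):

set.seed(5)
d <- data.frame(x = rnorm(100))
d$y <- exp(0.5 + 0.3 * d$x) * runif(100, 0.5, 1.5)      # positive, non-integer "counts"
fit <- glm(y ~ x, family = quasipoisson(link = "log"), data = d)
summary(fit)    # fits by quasi-likelihood; note that AIC is not defined for this fit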
Should I use Poisson distribution for non-integer, count-like data?
1) Is it correct to use a GLMM with Poisson distribution with such data? (I don't think so but glmer seems to work anyway) No, it is not correct. By "count data" we generally mean data that records
Should I use Poisson distribution for non-integer, count-like data? 1) Is it correct to use a GLMM with Poisson distribution with such data? (I don't think so but glmer seems to work anyway) No, it is not correct. By "count data" we generally mean data that records number of cases, so it can be only non-negative and integer-valued. The same is with Poisson distribution, that is a distribution for non-negative integer-valued data. Under Poisson distribution probability of observing non-integer value is zero and R behaves accordingly to it: dpois(c(1, 1.5, 2, 2.5, 3), 5) ## [1] 0.03368973 0.00000000 0.08422434 0.00000000 0.14037390 ## Warning messages: ## 1: In dpois(c(1, 1.5, 2, 2.5, 3), 5) : non-integer x = 1.500000 ## 2: In dpois(c(1, 1.5, 2, 2.5, 3), 5) : non-integer x = 2.500000 You can estimate log-linear glmm using this data but assuming Poisson distribution means that you treat all the non-integers as improbable values so R throws appropriate warnings. This means that the estimates of log-likelihood and the ones based on it, like AIC, won't be what you want them to be. This doesn't mean that you cannot estimate log-linear regression with non-integer data. You can, but you can't assume Poisson distribution for such data. See also What regression model is the most appropriate to use with count data? thread (check also the discussion in comments below the answer) and How does a Poisson distribution work when modeling continuous data and does it result in information loss? .
Should I use Poisson distribution for non-integer, count-like data? 1) Is it correct to use a GLMM with Poisson distribution with such data? (I don't think so but glmer seems to work anyway) No, it is not correct. By "count data" we generally mean data that records
32,231
Should I use Poisson distribution for non-integer, count-like data?
Since the problem arises because two treatments are relevant for the territory, why not create a new pseudo-treatment? So if you have treatments A, B and C, then a territory which receives A and B is recorded as having received AB. Obviously this could lead to a multiplicity of treatments with correspondingly few occurrences, but without more information about your data we cannot tell whether that is going to be tricky.
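A minimal sketch of that recoding in R (the column names and toy data are hypothetical):

d <- data.frame(trt1 = c("A", "A", "B", "C"),
                trt2 = c(NA,  "B", "C", NA))
d$pseudo <- ifelse(is.na(d$trt2), d$trt1, paste0(d$trt1, d$trt2))
d$pseudo <- factor(d$pseudo)    # levels: A, AB, BC, C
table(d$pseudo)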
Should I use Poisson distribution for non-integer, count-like data?
Since the problem arises because two treatments are relevant for the territory why not create a new pseudo-treatment? So if you have treatments A, B, C then a territory which receives A and B is recor
Should I use Poisson distribution for non-integer, count-like data? Since the problem arises because two treatments are relevant for the territory why not create a new pseudo-treatment? So if you have treatments A, B, C then a territory which receives A and B is recorded as having received AB? Obviously this could lead to a multiplicity of treatments with correspondingly few occurrences but without more information about your data we cannot tell whether that is going to be tricky.
Should I use Poisson distribution for non-integer, count-like data? Since the problem arises because two treatments are relevant for the territory why not create a new pseudo-treatment? So if you have treatments A, B, C then a territory which receives A and B is recor
32,232
Negative binomial regression in R allowing for correlation between dispersion & regression coefficients
I haven't found another R package which does this, but I have written code which, based on the maximum likelihood estimates of a model fitted with glm.nb, calculates the full variance-covariance matrix using the observed information matrix. Compared to values from SAS this appears to match, but if anyone spots an error or finds that it does not match the variance-covariance matrix from SAS or Stata, please add a comment to this answer.

glm.nb.cov <- function(mod) {
  # given a model fitted by glm.nb in MASS, this function returns a variance covariance matrix for the
  # regression coefficients and dispersion parameter, without assuming independence between these
  # note that the model must have been fitted with the x=TRUE argument so that the design matrix is available
  # formulae based on p23-p24 of http://pointer.esalq.usp.br/departamentos/lce/arquivos/aulas/2011/LCE5868/OverdispersionBook.pdf
  # and http://www.math.mcgill.ca/~dstephens/523/Papers/Lawless-1987-CJS.pdf
  k <- mod$theta
  # p is the number of regression coefficients
  p <- dim(vcov(mod))[1]

  # construct the observed information matrix
  obsInfo <- array(0, dim = c(p + 1, p + 1))

  # first calculate the top-left block, for the regression coefficients
  for (i in 1:p) {
    for (j in 1:p) {
      obsInfo[i, j] <- sum((1 + mod$y / mod$theta) * mod$fitted.values * mod$x[, i] * mod$x[, j] /
                             (1 + mod$fitted.values / mod$theta)^2)
    }
  }

  # information for the dispersion parameter
  obsInfo[(p + 1), (p + 1)] <- -sum(trigamma(mod$theta + mod$y) - trigamma(mod$theta) -
                                      1 / (mod$fitted.values + mod$theta) +
                                      (mod$theta + mod$y) / (mod$theta + mod$fitted.values)^2 -
                                      1 / (mod$fitted.values + mod$theta) + 1 / mod$theta)

  # covariance between regression coefficients and dispersion
  for (i in 1:p) {
    obsInfo[(p + 1), i] <- -sum(((mod$y - mod$fitted.values) * mod$fitted.values /
                                   ((mod$theta + mod$fitted.values)^2)) * mod$x[, i])
    obsInfo[i, (p + 1)] <- obsInfo[(p + 1), i]
  }

  # return the variance covariance matrix
  solve(obsInfo)
}
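A hypothetical usage sketch with simulated data (not from the original answer), remembering the x=TRUE requirement noted in the comments above:

library(MASS)
set.seed(7)
d <- data.frame(x = rnorm(200))
d$y <- rnbinom(200, mu = exp(1 + 0.5 * d$x), size = 2)
fit <- glm.nb(y ~ x, data = d, x = TRUE)    # x=TRUE keeps the design matrix in fit$x
V <- glm.nb.cov(fit)                        # (p+1) x (p+1): coefficients plus the dispersion parameter
sqrt(diag(V))                               # standard errors allowing for the correlation
vcov(fit)                                   # for comparison: the usual p x p block only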
Negative binomial regression in R allowing for correlation between dispersion & regression coefficie
I haven't found another R package which does this, but I have written code which, based on the maximum likelihood estimates of a model fitted with glm.nb, calculates the full variance covariance matri
Negative binomial regression in R allowing for correlation between dispersion & regression coefficients I haven't found another R package which does this, but I have written code which, based on the maximum likelihood estimates of a model fitted with glm.nb, calculates the full variance covariance matrix using the observed information matrix. Comparing to values from SAS this appears to match, but if anyone spots an error or finds that it does not match the variance covariance matrix from SAS or Stata, please add a comment to this answer. glm.nb.cov <- function(mod) { #given a model fitted by glm.nb in MASS, this function returns a variance covariance matrix for the #regression coefficients and dispersion parameter, without assuming independence between these #note that the model must have been fitted with x=TRUE argument so that design matrix is available #formulae based on p23-p24 of http://pointer.esalq.usp.br/departamentos/lce/arquivos/aulas/2011/LCE5868/OverdispersionBook.pdf #and http://www.math.mcgill.ca/~dstephens/523/Papers/Lawless-1987-CJS.pdf k <- mod$theta #p is number of regression coefficients p <- dim(vcov(mod))[1] #construct observed information matrix obsInfo <- array(0, dim=c(p+1, p+1)) #first calculate top left part for regression coefficients for (i in 1:p) { for (j in 1:p) { obsInfo[i,j] <- sum( (1+mod$y/mod$theta)*mod$fitted.values*mod$x[,i]*mod$x[,j] / (1+mod$fitted.values/mod$theta)^2 ) } } #information for dispersion parameter obsInfo[(p+1),(p+1)] <- -sum(trigamma(mod$theta+mod$y) - trigamma(mod$theta) - 1/(mod$fitted.values+mod$theta) + (mod$theta+mod$y)/(mod$theta+mod$fitted.values)^2 - 1/(mod$fitted.values+mod$theta) + 1/mod$theta) #covariance between regression coefficients and dispersion for (i in 1:p) { obsInfo[(p+1),i] <- -sum(((mod$y-mod$fitted.values) * mod$fitted.values / ( (mod$theta+mod$fitted.values)^2 )) * mod$x[,i] ) obsInfo[i,(p+1)] <- obsInfo[(p+1),i] } #return variance covariance matrix solve(obsInfo) }
Negative binomial regression in R allowing for correlation between dispersion & regression coefficie I haven't found another R package which does this, but I have written code which, based on the maximum likelihood estimates of a model fitted with glm.nb, calculates the full variance covariance matri
32,233
Bootstrapping a sample from a finite population
Bootstrap sampling should resemble the process of sampling the data from the population. In the case of a finite population you sampled a fraction $f$ out of a population of size $N$, i.e. $n = fN$ cases. There are two problems with using the bootstrap in such a scenario: (1) if you used the traditional bootstrap, you'd be sampling with replacement rather than without replacement; (2) if you sampled without replacement $fn$ cases, then you'd end up with a sample smaller than $n$. The first scenario is a bad idea since in that case the bootstrap would not resemble the original sampling process. For using the bootstrap in the finite-population case you have three alternatives:
1. Sample without replacement samples of size $fn$ and then rescale the results. Finding the appropriate rescaling factor can be more complicated than it sounds, so this may not be the best alternative.
2. First sample without replacement $N-n$ cases out of your sample, concatenate them to the sample, and then sample without replacement $n$ cases out of it. This is called the mirror-match bootstrap.
3. First sample with replacement $N$ cases out of your sample, and then sample out of it $n$ cases without replacement. This is called the superpopulation bootstrap.
To learn more about those methods you could check the following resources:
Davison, A. C. & Hinkley, D. V. (2009). Bootstrap methods and their application. New York, NY: Cambridge University Press.
Sitter, R. R. (1992). A resampling procedure for complex survey data. Journal of the American Statistical Association, 87(419), 755-765.
Sitter, R. R. (1992). Comparing three bootstrap methods for survey data. Canadian Journal of Statistics, 20(2), 135-154.
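A rough sketch of the third (superpopulation) alternative listed above, for bootstrapping the sample mean (the population size, sample and number of replicates are made up for illustration):

set.seed(8)
N <- 1000                                    # finite population size
samp <- rnorm(100, mean = 50, sd = 10)       # the observed sample, n = 100
n <- length(samp)
boot_means <- replicate(2000, {
  pseudo_pop <- sample(samp, N, replace = TRUE)    # step 1: build a pseudo-population of size N
  mean(sample(pseudo_pop, n, replace = FALSE))     # step 2: resample n cases without replacement
})
sd(boot_means)                               # bootstrap SE, reflecting the finite-population setting
sqrt(1 - n / N) * sd(samp) / sqrt(n)         # compare with the analytic fpc-adjusted SE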
Bootstrapping a sample from a finite population
Bootstrap sampling should resemble the process of sampling the data from the population. In case of finite population you sampled fraction $f$ out of population of size $N$, i.e. $n = fN$ cases. There
Bootstrapping a sample from a finite population Bootstrap sampling should resemble the process of sampling the data from the population. In case of finite population you sampled fraction $f$ out of population of size $N$, i.e. $n = fN$ cases. There are two problems with using bootstrap in such scenario: (1) if you used traditional bootstrap, you'd be sampling with replacement rather than without replacement, (2) if you sampled without replacement $fn$ cases, then you'd end up with sample smaller than $n$. The first scenario is a bad idea since in such case bootstrap would not resemble the original sampling process. For using bootstrap in finite population case you have three alternatives: Sample without replacement samples of size $fn$ and then rescale the results. Finding the appropriate rescaling factor can be more complicated then it sounds, so this may not be the best alternative. First sample without replacement $N-n$ cases out of your sample, concatenate them to the sample, and then sample without replacement $n$ cases out of it. This is called mirror-match bootstrap. First sample with replacement $N$ cases out of your sample, and then sample out of it $n$ cases without replacement. This is called superpopulation bootstrap. To learn more about those methods you could check the following resources: Davison, A. C. & Hinkley, D. V. (2009). Bootstrap methods and their application. New York, NY: Cambridge University Press. Sitter, R. R. (1992). A resampling procedure for complex survey data. Journal of the American Statistical Association, 87(419), 755-765. Sitter, R. R. (1992). Comparing three bootstrap methods for survey data. Canadian Journal of Statistics, 20(2), 135-154.
Bootstrapping a sample from a finite population Bootstrap sampling should resemble the process of sampling the data from the population. In case of finite population you sampled fraction $f$ out of population of size $N$, i.e. $n = fN$ cases. There
32,234
Why are regression coefficients in a factor analysis model called "loadings"?
I don't get how it follows that we should say "factor loads variable" rather than vice-versa
Abstract explanation. If a point, seen as an object, has a coordinate on an axis, seen as a feature, then the coordinate is how much the feature loads the point, how much it charges, by itself, that point. If my height is 1.86 m then this is how I'm loaded by height (not how much height is loaded by me). Note that a loading is the variable's coordinate on the factor-as-axis on the loading plot.
Latent-trait explanation. A factor is conceptualized as an entity which plays "in" the variables or "behind" them and which makes them correlate. Therefore "load" is intuitively a good word to express how strongly the variable is dependent on, driven by, the latent factor. The factor analysis model is a regression model whereby factors explain or "influence" the observed variables. Any regression coefficient (not only a factor analytic one) may be labeled a "loading": regression coefficient = regression weight = regression loading. A further reason to call a factor's coefficient a "loading" comes from the fact that in the factor model the factors $F$ are set standardized, each with unit variance, while a variable $V$ isn't necessarily standardized. It follows that the effect on $V$ is realized/expressed completely and only via the loading coefficients. Whenever, in a regression model, a standardized variable predicts a potentially unstandardized one, you may call the coefficient a "loading".
Why do we need the term "loading" at all, when we already have the term "regression coefficient"?
We actually don't. The word "loading" is simply a tradition stemming from psychologists' liking for figurative language (FA started to develop a century ago among psychologists). Moreover, the term "loading" may have a somewhat different statistical meaning in other related multivariate methods (such as discriminant analysis). In general, some people in some cases call regression coefficients "loadings", while others, or in other cases, use the word for correlation coefficients. So the term is confusing. It is not a statistical term, ultimately. If you don't like the word, don't use it. You may also say "variable loads (on) factor" if you want; to me, it is simply thoughtless speech, not a vice.
P.S. I've just looked in an English dictionary (English isn't my language) and observed that "to load" may have meanings such as (1) "I loaded the cart" (with a bag, or with myself as I climbed aboard); (2) "the ship loads (up) many passengers (on it)". Following the second usage, it would be quite OK to say "the variable loads the factor (on itself, the variable) well".
Why are regression coefficients in a factor analysis model called "loadings"?
I don't get how it follows that we should say "factor loads variable" rather than vice-versa Abstract explanation. If a point seen as object has a coordinate on an axis seen as feature then the coo
Why are regression coefficients in a factor analysis model called "loadings"? I don't get how it follows that we should say "factor loads variable" rather than vice-versa Abstract explanation. If a point seen as object has a coordinate on an axis seen as feature then the coordinate is how much the feature loads the point, how much it charges, by itself, that point. If my height is 1.86 m then this is how I'm loaded by height (not how much height is loaded by me). Note that loading is variable's coordinate on factor-as-axis on the loading plot. Latent-trait explanation. Factor is conceptualized as an entity which plays "in" the variables or "behind" them and which makes them correlate. Therefore "load" is intuitively a good word to express the degree how strongly the variable is dependent on, driven by, the latent factor. Factor analysis model is regressional model whereby factors explain or "influence" observed variables. Any regression coefficient (not only factor analytic) may be labeled a "loading": regressional coefficient = regressional weight = regressional loading. More reason to call a factor's coefficient "loading" comes from the fact that in the factor model, factors $F$s are set standardized, each unit-variance, while a variable $V$ isn't necessarily standardized. There comes therefore that the effect on $V$ is realized/expressed completely and only via the loading coefficients. Whenever in regressional model a standardized variable predicts a potentially unstandardized one - call the coefficient "loading". Why we need the term "loading" at all, when we already had the term "regression coefficient" We actually don't need. Word "loading" is simply a tradition stemming from psychologists' liking for figurative sense (FA started to develop a century ago among psychologists). Moreover, the term "loading" may have somewhat different statistical meaning in other related multivariate methods (such as discriminant analysis). In general, some people in some cases call "loadings" regression coefficients, while other or in other cases - correlation coefficients. So the term is confusing. It is not a statistical term, ultimately. If you don't like the word, don't use it. You may also say "variable loads (on) factor" if you want; to me, it is simply a thoughtless speech, not a vice. P.S. I've just looked in an English dictionary (English isn't my language) and observed that to load may have meanings as (1) "I loaded the cart" (by a bag, or by myself as embarked); (2) "the ship loads (up) many passengers (on it)". If to follow the second word usage, it would be quite OK to say "the variable loads the factor (on itself, the variable) well".
Why are regression coefficients in a factor analysis model called "loadings"? I don't get how it follows that we should say "factor loads variable" rather than vice-versa Abstract explanation. If a point seen as object has a coordinate on an axis seen as feature then the coo
32,235
Why is the degrees of freedom for a matched pairs $t$-test the number of pairs minus 1?
The matched-pairs $t$-test with $n$ pairs is actually just a one-sample $t$-test with a sample of size $n$. You have $n$ differences $d_1,\ldots,d_n$, and these are i.i.d. and normally distributed. $$ \begin{array}{ccccc} \begin{bmatrix} d_1 \\ \vdots \\ d_n \end{bmatrix} & = & \begin{bmatrix} \bar d \\ \vdots \\ \bar d \end{bmatrix} & + & \begin{bmatrix} d_1 - \bar d \\ \vdots \\ d_n - \bar d \end{bmatrix} \\[10pt] n \text{ d.f.} & & 1 \text{ d.f.} & & (n-1) \text{ d.f.} \end{array} $$ The first vector after $\text{“}{=}\text{''}$ has $1$ degree of freedom because of the linear constraint that says all its entries are equal; the second has $n-1$ degrees of freedom because of the linear constraint that says the sum of its entries is $0$.
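This equivalence is easy to check in R (the paired data below are made up): the paired test and the one-sample test on the differences give the same statistic and $n - 1 = 19$ degrees of freedom.

set.seed(9)
before <- rnorm(20, mean = 10)
after  <- before + rnorm(20, mean = 0.5)
t.test(after, before, paired = TRUE)    # paired t-test, df = 19
t.test(after - before, mu = 0)          # one-sample t-test on the differences, identical result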
Why is the degrees of freedom for a matched pairs $t$-test the number of pairs minus 1?
The matched-pairs $t$-test with $n$ pairs is actually just a one-sample $t$-test with a sample of size $n$. You have $n$ differences $d_1,\ldots,d_n$, and these are i.i.d. and normally distributed. $
Why is the degrees of freedom for a matched pairs $t$-test the number of pairs minus 1? The matched-pairs $t$-test with $n$ pairs is actually just a one-sample $t$-test with a sample of size $n$. You have $n$ differences $d_1,\ldots,d_n$, and these are i.i.d. and normally distributed. $$ \begin{array}{ccccc} \begin{bmatrix} d_1 \\ \vdots \\ d_n \end{bmatrix} & = & \begin{bmatrix} \bar d \\ \vdots \\ \bar d \end{bmatrix} & + & \begin{bmatrix} d_1 - \bar d \\ \vdots \\ d_1 - \bar d \end{bmatrix} \\[10pt] n \text{ d.f.} & & 1 \text{ d.f.} & & (n-1) \text{ d.f.} \end{array} $$ The first column after $\text{“}{=}\text{''}$ has $1$ degree of freedom because of the linear constraint that says all entries are equal; the second has $n-1$ degrees of freedom because of the linear constraint that says the sum of the entries is $0$.
Why is the degrees of freedom for a matched pairs $t$-test the number of pairs minus 1? The matched-pairs $t$-test with $n$ pairs is actually just a one-sample $t$-test with a sample of size $n$. You have $n$ differences $d_1,\ldots,d_n$, and these are i.i.d. and normally distributed. $
32,236
Why is the degrees of freedom for a matched pairs $t$-test the number of pairs minus 1?
Many, many thanks to Michael Hardy for answering my question. The idea is this: let $$\mathbf{y} = \begin{bmatrix} d_1 \\ \vdots \\ d_n \end{bmatrix}$$ and $\boldsymbol{\beta} = [\mu_1 - \mu_2]$. Then our linear model is then $$\mathbf{y} = \mathbf{1}_{n \times 1}\boldsymbol{\beta} + \boldsymbol{\epsilon}$$ where $\mathbf{1}_{n \times 1}$ is the $n$-vector of all ones, and $$\boldsymbol{\epsilon} = \begin{bmatrix} \epsilon_1 \\ \vdots \\ \epsilon_n \end{bmatrix} \sim \mathcal{N}(\mathbf{0}, \sigma^2\mathbf{I}_n)\text{.}$$ Obviously $\mathbf{X} = \mathbf{1}_{n \times 1}$ has rank $1$, so then we have $n-1$ degrees of freedom. How do we know to set $\boldsymbol{\beta}$ equal to $[\mu_1 - \mu_2]$? Recall that $$\mathbb{E}[\mathbf{y}] = \mathbf{X}\boldsymbol{\beta}$$ and as it can be easily seen, $\mathbb{E}[d_j] = \mu_1 - \mu_2$ for all $j$. Given our $\mathbf{X}$, it is obvious what $\boldsymbol{\beta}$ should be. This is because $$\mathbb{E}[\mathbf{y}] = \mathbb{E}\left[\begin{bmatrix} d_1 \\ \vdots \\ d_n \end{bmatrix} \right] = \begin{bmatrix} \mathbb{E}[d_1] \\ \vdots \\ \mathbb{E}[d_n] \end{bmatrix} = \begin{bmatrix} \mu_1 - \mu_2 \\ \vdots \\ \mu_1 - \mu_2 \end{bmatrix} = \mathbf{X}\boldsymbol\beta = \mathbf{1}_{n \times 1}\boldsymbol\beta = \begin{bmatrix} 1 \\ \vdots \\ 1 \end{bmatrix}\boldsymbol\beta$$ so $\boldsymbol\beta$ should be a $1 \times 1$ matrix with $\boldsymbol\beta = [\mu_1 - \mu_2]$. Set $\mathbf{c}^{\prime} = [1]$. Then our hypothesis test is $$H_0: \mathbf{c}^{\prime}\boldsymbol{\beta} = 0\text{.}$$ Our test statistic is thus $$\dfrac{\mathbf{c}^{\prime}\hat{\boldsymbol{\beta}}}{\sqrt{\hat{\sigma}^2\mathbf{c}^{\prime}\left(\mathbf{X}^{\prime}\mathbf{X}\right)^{-1}\mathbf{c}}}\text{.}$$ We have $$\hat{\sigma}^2 = \dfrac{\mathbf{y}^{\prime}(\mathbf{I}-\mathbf{P}_{\mathbf{X}})\mathbf{y}}{n-r(\mathbf{X})}\text{.}$$ After some work, it can be shown that $$\mathbf{P}_\mathbf{X} = \mathbf{P}_{\mathbf{1}_{n \times 1}} = \mathbf{1}_{n \times 1}\left(\dfrac{1}{n}\right)\mathbf{1}^{\prime}\text{.}$$ It can also be shown that $\mathbf{I}-\mathbf{P}_{\mathbf{X}}$ is symmetric and idempotent. So, $$\begin{align} \hat{\sigma}^2 &= \dfrac{\mathbf{y}^{\prime}(\mathbf{I}-\mathbf{P}_{\mathbf{X}})\mathbf{y}}{n-r(\mathbf{X})} \\ &= \dfrac{\mathbf{y}^{\prime}(\mathbf{I}-\mathbf{P}_{\mathbf{X}})^{\prime}(\mathbf{I}-\mathbf{P}_{\mathbf{X}})\mathbf{y}}{n-r(\mathbf{X})} \\ &= \dfrac{\|(\mathbf{I}-\mathbf{P}_{\mathbf{X}})\mathbf{y}\|^{2}}{n-r(\mathbf{X})} \\ &=\dfrac{\left\|\left[\mathbf{I}-\mathbf{1}_{n \times 1}\left(\dfrac{1}{n}\right)\mathbf{1}^{\prime}\right]\mathbf{y} \right\|^2}{n-1} \\ &= \dfrac{\left\|\begin{bmatrix} d_1 \\ \vdots \\ d_n \end{bmatrix} - \begin{bmatrix} \bar{d}_\cdot \\ \vdots \\ \bar{d}_\cdot \end{bmatrix} \right\|^2}{n-1} \\ &= \dfrac{\sum_{i=1}^{n}(d_i-\bar{d}_{\cdot})^2}{n-1} \\ &= s^2_d \end{align}$$ and $$\mathbf{X}^{\prime}\mathbf{X} = \mathbf{1}_{n \times 1}^{\prime}\mathbf{1}_{n \times 1} = n$$ which obviously has inverse $1/n$, thus giving a test statistic $$\dfrac{\hat\mu_1-\hat\mu_2}{\sqrt{s^2_d/n}}$$ which would be tested on a central $t$-distribution with $n - 1$ degrees of freedom as desired.
Why is the degrees of freedom for a matched pairs $t$-test the number of pairs minus 1?
Many, many thanks to Michael Hardy for answering my question. The idea is this: let $$\mathbf{y} = \begin{bmatrix} d_1 \\ \vdots \\ d_n \end{bmatrix}$$ and $\boldsymbol{\beta} = [\mu_1 - \mu_2]$. The
Why is the degrees of freedom for a matched pairs $t$-test the number of pairs minus 1? Many, many thanks to Michael Hardy for answering my question. The idea is this: let $$\mathbf{y} = \begin{bmatrix} d_1 \\ \vdots \\ d_n \end{bmatrix}$$ and $\boldsymbol{\beta} = [\mu_1 - \mu_2]$. Then our linear model is then $$\mathbf{y} = \mathbf{1}_{n \times 1}\boldsymbol{\beta} + \boldsymbol{\epsilon}$$ where $\mathbf{1}_{n \times 1}$ is the $n$-vector of all ones, and $$\boldsymbol{\epsilon} = \begin{bmatrix} \epsilon_1 \\ \vdots \\ \epsilon_n \end{bmatrix} \sim \mathcal{N}(\mathbf{0}, \sigma^2\mathbf{I}_n)\text{.}$$ Obviously $\mathbf{X} = \mathbf{1}_{n \times 1}$ has rank $1$, so then we have $n-1$ degrees of freedom. How do we know to set $\boldsymbol{\beta}$ equal to $[\mu_1 - \mu_2]$? Recall that $$\mathbb{E}[\mathbf{y}] = \mathbf{X}\boldsymbol{\beta}$$ and as it can be easily seen, $\mathbb{E}[d_j] = \mu_1 - \mu_2$ for all $j$. Given our $\mathbf{X}$, it is obvious what $\boldsymbol{\beta}$ should be. This is because $$\mathbb{E}[\mathbf{y}] = \mathbb{E}\left[\begin{bmatrix} d_1 \\ \vdots \\ d_n \end{bmatrix} \right] = \begin{bmatrix} \mathbb{E}[d_1] \\ \vdots \\ \mathbb{E}[d_n] \end{bmatrix} = \begin{bmatrix} \mu_1 - \mu_2 \\ \vdots \\ \mu_1 - \mu_2 \end{bmatrix} = \mathbf{X}\boldsymbol\beta = \mathbf{1}_{n \times 1}\boldsymbol\beta = \begin{bmatrix} 1 \\ \vdots \\ 1 \end{bmatrix}\boldsymbol\beta$$ so $\boldsymbol\beta$ should be a $1 \times 1$ matrix with $\boldsymbol\beta = [\mu_1 - \mu_2]$. Set $\mathbf{c}^{\prime} = [1]$. Then our hypothesis test is $$H_0: \mathbf{c}^{\prime}\boldsymbol{\beta} = 0\text{.}$$ Our test statistic is thus $$\dfrac{\mathbf{c}^{\prime}\hat{\boldsymbol{\beta}}}{\sqrt{\hat{\sigma}^2\mathbf{c}^{\prime}\left(\mathbf{X}^{\prime}\mathbf{X}\right)^{-1}\mathbf{c}}}\text{.}$$ We have $$\hat{\sigma}^2 = \dfrac{\mathbf{y}^{\prime}(\mathbf{I}-\mathbf{P}_{\mathbf{X}})\mathbf{y}}{n-r(\mathbf{X})}\text{.}$$ After some work, it can be shown that $$\mathbf{P}_\mathbf{X} = \mathbf{P}_{\mathbf{1}_{n \times 1}} = \mathbf{1}_{n \times 1}\left(\dfrac{1}{n}\right)\mathbf{1}^{\prime}\text{.}$$ It can also be shown that $\mathbf{I}-\mathbf{P}_{\mathbf{X}}$ is symmetric and idempotent. So, $$\begin{align} \hat{\sigma}^2 &= \dfrac{\mathbf{y}^{\prime}(\mathbf{I}-\mathbf{P}_{\mathbf{X}})\mathbf{y}}{n-r(\mathbf{X})} \\ &= \dfrac{\mathbf{y}^{\prime}(\mathbf{I}-\mathbf{P}_{\mathbf{X}})^{\prime}(\mathbf{I}-\mathbf{P}_{\mathbf{X}})\mathbf{y}}{n-r(\mathbf{X})} \\ &= \dfrac{\|(\mathbf{I}-\mathbf{P}_{\mathbf{X}})\mathbf{y}\|^{2}}{n-r(\mathbf{X})} \\ &=\dfrac{\left\|\left[\mathbf{I}-\mathbf{1}_{n \times 1}\left(\dfrac{1}{n}\right)\mathbf{1}^{\prime}\right]\mathbf{y} \right\|^2}{n-1} \\ &= \dfrac{\left\|\begin{bmatrix} d_1 \\ \vdots \\ d_n \end{bmatrix} - \begin{bmatrix} \bar{d}_\cdot \\ \vdots \\ \bar{d}_\cdot \end{bmatrix} \right\|^2}{n-1} \\ &= \dfrac{\sum_{i=1}^{n}(d_i-\bar{d}_{\cdot})^2}{n-1} \\ &= s^2_d \end{align}$$ and $$\mathbf{X}^{\prime}\mathbf{X} = \mathbf{1}_{n \times 1}^{\prime}\mathbf{1}_{n \times 1} = n$$ which obviously has inverse $1/n$, thus giving a test statistic $$\dfrac{\hat\mu_1-\hat\mu_2}{\sqrt{s^2_d/n}}$$ which would be tested on a central $t$-distribution with $n - 1$ degrees of freedom as desired.
Why is the degrees of freedom for a matched pairs $t$-test the number of pairs minus 1? Many, many thanks to Michael Hardy for answering my question. The idea is this: let $$\mathbf{y} = \begin{bmatrix} d_1 \\ \vdots \\ d_n \end{bmatrix}$$ and $\boldsymbol{\beta} = [\mu_1 - \mu_2]$. The
32,237
Intuition behind the characteristic equation of an AR or MA process
When trying to get an intuitive understanding of formal mathematical models, it is usually best to start with a simple model and then generalise later. So, with that in mind, let's start with an AR$(1)$ model with zero mean based on a white-noise series $\varepsilon_i \sim \text{IID N}(0,1)$. This model can be written in scalar form as: $$Y_t = \phi Y_{t-1} + \sigma \varepsilon_t.$$ Now, you can substitute in this auto-regression to get $Y_t$ in terms of earlier and earlier terms: $$\begin{equation} \begin{aligned} Y_t &= \phi Y_{t-1} + \sigma \varepsilon_t \\[6pt] &= \phi (\phi Y_{t-2} + \sigma \varepsilon_{t-1}) + \sigma \varepsilon_t \\[6pt] &= \phi^2 Y_{t-2} + \sigma (\varepsilon_t + \phi \varepsilon_{t-1}) \\[6pt] &= \phi^2 (\phi Y_{t-3} + \sigma \varepsilon_{t-2}) + \sigma (\varepsilon_t + \phi \varepsilon_{t-1}) \\[6pt] &= \phi^3 Y_{t-3} + \sigma (\varepsilon_t + \phi \varepsilon_{t-1} + \phi^2 \varepsilon_{t-2}) \\[6pt] &= \phi^3 (\phi Y_{t-4} + \sigma \varepsilon_{t-3}) + \sigma (\varepsilon_t + \phi \varepsilon_{t-1} + \phi^2 \varepsilon_{t-2}) \\[6pt] &= \phi^4 Y_{t-4} + \sigma (\varepsilon_t + \phi \varepsilon_{t-1} + \phi^2 \varepsilon_{t-2} + \phi^3 \varepsilon_{t-3}) \\[6pt] &= \cdots \\[6pt] &= \phi^k Y_{t-k} + \sigma \sum_{i=0}^{k-1} \phi^i \varepsilon_{t-i}. \end{aligned} \end{equation}$$ If $|\phi| < 1$ then this first term vanishes as $k \rightarrow \infty$ and then you have the MA$(\infty)$ representation: $$Y_t = \sigma \sum_{i=0}^\infty \phi^i \varepsilon_{t-i}.$$ This shows you that if $|\phi| < 1$ then you can write an AR$(1)$ process as an MA$(\infty)$ process. The infinite sum in this expression is called a generating function, and in this representation it allows you to find the distribution of the observable series of values. Using the characteristic polynomial: Rather than doing all this in scalar form, the model can be written using the lag-operator $L$ as: $$\phi(L) Y_t = \sigma \varepsilon_t,$$ where $\phi(L) = 1 - \phi L$ is the auto-regressive characteristic polynomial (which in this case is an affine function). Now, it turns out that this polynomial function can be inverted in the same way as a polynomial function involving a real or complex number (as opposed to the lag operator). That is, if $|\phi| < 1$ then the polynomial follows the inversion rule for an infinite geometric sum: $$\phi^{-1}(L) = \frac{1}{1-\phi L} = \sum_{i=0}^\infty \phi^i L^i.$$ Applying this to the process you get the MA$(\infty)$ representation we derived in scalar form before: $$Y_t = \sigma \phi^{-1}(L) \varepsilon_t = \sigma \sum_{i=0}^\infty \phi^i L^i \varepsilon_t = \sigma \sum_{i=0}^\infty \phi^i \varepsilon_{t-i}.$$ You can see from the above that it is possible to deal with time-series models via scalar methods, without using the lag operator at all. By introducing the lag operator, and polynomial functions of this operator, certain calculations (like the above inversion) become much simpler. In order to verify that these are allowable, mathematicians have to appeal to the theory of functions and operators to establish that polynomials involving the lag function behave like polynomials involving real/complex numbers. Once they have established that, this allows them to use polynomials involving the lag operator to simplify changes of form in time-series models.
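A small simulation can make the inversion concrete (a sketch; the values of $\phi$, $\sigma$ and the truncation point are arbitrary): generate an AR(1) recursively and compare it with a truncated version of the MA($\infty$) representation built from the same shocks.

set.seed(10)
phi <- 0.7; sigma <- 1; n <- 300
eps <- rnorm(n)
y_ar <- numeric(n)
for (t in 2:n) y_ar[t] <- phi * y_ar[t - 1] + sigma * eps[t]    # AR(1) recursion, started at 0

# truncated MA(infinity): y_t ~ sigma * sum_{i=0}^{K-1} phi^i * eps_{t-i}
K <- 50
y_ma <- sapply(K:n, function(t) sigma * sum(phi^(0:(K - 1)) * eps[t - (0:(K - 1))]))
max(abs(y_ar[K:n] - y_ma))    # tiny: the remainder phi^K y_{t-K} is negligible when |phi| < 1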
Intuition behind the characteristic equation of an AR or MA process
When trying to get an intuitive understanding of formal mathematical models, it is usually best to start with a simple model and then generalise later. So, with that in mind, let's start with an AR$(
Intuition behind the characteristic equation of an AR or MA process When trying to get an intuitive understanding of formal mathematical models, it is usually best to start with a simple model and then generalise later. So, with that in mind, let's start with an AR$(1)$ model with zero mean based on a white-noise series $\varepsilon_i \sim \text{IID N}(0,1)$. This model can be written in scalar form as: $$Y_t = \phi Y_{t-1} + \sigma \varepsilon_t.$$ Now, you can substitute in this auto-regression to get $Y_t$ in terms of earlier and earlier terms: $$\begin{equation} \begin{aligned} Y_t &= \phi Y_{t-1} + \sigma \varepsilon_t \\[6pt] &= \phi (\phi Y_{t-2} + \sigma \varepsilon_{t-1}) + \sigma \varepsilon_t \\[6pt] &= \phi^2 Y_{t-2} + \sigma (\varepsilon_t + \phi \varepsilon_{t-1}) \\[6pt] &= \phi^2 (\phi Y_{t-3} + \sigma \varepsilon_{t-2}) + \sigma (\varepsilon_t + \phi \varepsilon_{t-1}) \\[6pt] &= \phi^3 Y_{t-3} + \sigma (\varepsilon_t + \phi \varepsilon_{t-1} + \phi^2 \varepsilon_{t-2}) \\[6pt] &= \phi^3 (\phi Y_{t-4} + \sigma \varepsilon_{t-3}) + \sigma (\varepsilon_t + \phi \varepsilon_{t-1} + \phi^2 \varepsilon_{t-2}) \\[6pt] &= \phi^4 Y_{t-4} + \sigma (\varepsilon_t + \phi \varepsilon_{t-1} + \phi^2 \varepsilon_{t-2} + \phi^3 \varepsilon_{t-3}) \\[6pt] &= \cdots \\[6pt] &= \phi^k Y_{t-k} + \sigma \sum_{i=0}^{k-1} \phi^i \varepsilon_{t-i}. \end{aligned} \end{equation}$$ If $|\phi| < 1$ then this first term vanishes as $k \rightarrow \infty$ and then you have the MA$(\infty)$ representation: $$Y_t = \sigma \sum_{i=0}^\infty \phi^i \varepsilon_{t-i}.$$ This shows you that if $|\phi| < 1$ then you can write an AR$(1)$ process as an MA$(\infty)$ process. The infinite sum in this expression is called a generating function, and in this representation it allows you to find the distribution of the observable series of values. Using the characteristic polynomial: Rather than doing all this in scalar form, the model can be written using the lag-operator $L$ as: $$\phi(L) Y_t = \sigma \varepsilon_t,$$ where $\phi(L) = 1 - \phi L$ is the auto-regressive characteristic polynomial (which in this case is an affine function). Now, it turns out that this polynomial function can be inverted in the same way as a polynomial function involving a real or complex number (as opposed to the lag operator). That is, if $|\phi| < 1$ then the polynomial follows the inversion rule for an infinite geometric sum: $$\phi^{-1}(L) = \frac{1}{1-\phi L} = \sum_{i=0}^\infty \phi^i L^i.$$ Applying this to the process you get the MA$(\infty)$ representation we derived in scalar form before: $$Y_t = \sigma \phi^{-1}(L) \varepsilon_t = \sigma \sum_{i=0}^\infty \phi^i L^i \varepsilon_t = \sigma \sum_{i=0}^\infty \phi^i \varepsilon_{t-i}.$$ You can see from the above that it is possible to deal with time-series models via scalar methods, without using the lag operator at all. By introducing the lag operator, and polynomial functions of this operator, certain calculations (like the above inversion) become much simpler. In order to verify that these are allowable, mathematicians have to appeal to the theory of functions and operators to establish that polynomials involving the lag function behave like polynomials involving real/complex numbers. Once they have established that, this allows them to use polynomials involving the lag operator to simplify changes of form in time-series models.
Intuition behind the characteristic equation of an AR or MA process When trying to get an intuitive understanding of formal mathematical models, it is usually best to start with a simple model and then generalise later. So, with that in mind, let's start with an AR$(
32,238
What is the gradient-log-normalizer?
Using notation from the wikipedia page (https://en.wikipedia.org/wiki/Exponential_family), an exponential family is a family of probability distributions that have pmfs/pdfs that can be written as (noting that $\theta$, $x$ can be vector valued): $$f_{\theta}(x)=h(x)\exp[\eta(\theta)^Tt(x)-A(\theta)]$$ where $\eta(\theta)=\eta$ are the natural parameters, $t(x)$ are the sufficient statistics, and $A(\theta)$ is the log normalizer (sometimes called the log partition function). The reason $A(\theta)$ is called the log normalizer, as it can be verified that, in the continuous case, for this to be a valid pdf, we must have $$A(\theta)=\log\left[\int h(x)\exp[\eta(\theta)^Tt(x)]dx\right],$$ and in the discrete case, for this to be a valid pmf, we must have $$A(\theta)=\log\left[\sum_x h(x)\exp[\eta(\theta)^Tt(x)]\right].$$ In each case we notice that $\int h(x)\exp[\eta(\theta)^Tt(x)]dx$ and $\sum_x h(x)\exp[\eta(\theta)^Tt(x)]$ are the normalization constants of the distributions, hence the name log normalizer. Now to see the specific relationship between the softmax function and the $k$ dimensional categorical distribution, we'll have to use a specific parameterization of the distribution. Namely, let $\theta_1,\cdots,\theta_{k-1}$ be such that $0<\theta_1,\cdots,\theta_{k-1}$ and $\sum_{i=1}^{k-1}\theta_i<1$, and define $\theta_k=1-\sum_{i=1}^{k-1}\theta_i$ (letting $\theta=(\theta_1,\cdots,\theta_{k})$). The pmf for this distribution is (letting $x=(x_1,\cdots,x_{k})$ be a one hot vector, i.e. $x_i=1$ and $x_j=0$ for $i\neq j$): $$f_{\theta}(x)=\prod_{i=1}^k\theta_i^{x_i}.$$ To write this as an exponential family, note that $h(x)=1$, $\eta(\theta)=(\log[\theta_1/\theta_k],\cdots, \log[\theta_{k-1}/\theta_k],0)$, $t(x)=(x_1,\cdots,x_{k})$, and $A(\theta)=-\log[\theta_k]$, so: $$f_{\theta}(x)=\exp[(\log[\theta_1/\theta_k],\cdots, \log[\theta_{k-1}/\theta_k],0)^T(x_1,\cdots,x_{k})-(-\log[\theta_k])].$$ Now let's suggestively write $\eta(\theta_i)=\log[\theta_i/\theta_k]=\eta_i$, so that we can write $\theta_i=\frac{e^{\eta_i}}{\sum_{j=1}^ke^{\eta_j}}$. Then the log normalizer becomes $$A(\eta)=-\log\left[\frac{e^{\eta_k}}{\sum_{j=1}^ke^{\eta_j}}\right]= -\log\left[\frac{1}{\sum_{j=1}^ke^{\eta_j}}\right]=\log\left[\sum_{j=1}^ke^{\eta_j}\right].$$ Taking the partial derivative with respect to $\eta_i$, we find $$\frac{\partial}{\partial \eta_i}A(\eta)=\frac{e^{\eta_i}}{\sum_{j=1}^ke^{\eta_j}},$$ revealing that the gradient of the log normalizer is indeed the softmax function: $$\nabla A(\eta)=\left[\frac{e^{\eta_1}}{\sum_{j=1}^ke^{\eta_j}},\cdots,\frac{e^{\eta_k}}{\sum_{j=1}^ke^{\eta_j}}\right].$$
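A quick numerical check of this result (a sketch; the $\eta$ values are arbitrary): compare a finite-difference gradient of the log normalizer $A(\eta) = \log\sum_j e^{\eta_j}$ with the softmax function.

A <- function(eta) log(sum(exp(eta)))                    # log normalizer (log-sum-exp)
softmax <- function(eta) exp(eta) / sum(exp(eta))
eta <- c(0.3, -1.2, 2.0, 0.0)
h <- 1e-6
num_grad <- sapply(seq_along(eta), function(i) {
  e <- replace(numeric(length(eta)), i, h)               # unit perturbation in coordinate i
  (A(eta + e) - A(eta - e)) / (2 * h)                    # central difference
})
cbind(numeric_gradient = num_grad, softmax = softmax(eta))    # the two columns agree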
What is the gradient-log-normalizer?
Using notation from the wikipedia page (https://en.wikipedia.org/wiki/Exponential_family), an exponential family is a family of probability distributions that have pmfs/pdfs that can be written as (no
What is the gradient-log-normalizer? Using notation from the wikipedia page (https://en.wikipedia.org/wiki/Exponential_family), an exponential family is a family of probability distributions that have pmfs/pdfs that can be written as (noting that $\theta$, $x$ can be vector valued): $$f_{\theta}(x)=h(x)\exp[\eta(\theta)^Tt(x)-A(\theta)]$$ where $\eta(\theta)=\eta$ are the natural parameters, $t(x)$ are the sufficient statistics, and $A(\theta)$ is the log normalizer (sometimes called the log partition function). The reason $A(\theta)$ is called the log normalizer, as it can be verified that, in the continuous case, for this to be a valid pdf, we must have $$A(\theta)=\log\left[\int h(x)\exp[\eta(\theta)^Tt(x)]dx\right],$$ and in the discrete case, for this to be a valid pmf, we must have $$A(\theta)=\log\left[\sum_x h(x)\exp[\eta(\theta)^Tt(x)]\right].$$ In each case we notice that $\int h(x)\exp[\eta(\theta)^Tt(x)]dx$ and $\sum_x h(x)\exp[\eta(\theta)^Tt(x)]$ are the normalization constants of the distributions, hence the name log normalizer. Now to see the specific relationship between the softmax function and the $k$ dimensional categorical distribution, we'll have to use a specific parameterization of the distribution. Namely, let $\theta_1,\cdots,\theta_{k-1}$ be such that $0<\theta_1,\cdots,\theta_{k-1}$ and $\sum_{i=1}^{k-1}\theta_i<1$, and define $\theta_k=1-\sum_{i=1}^{k-1}\theta_i$ (letting $\theta=(\theta_1,\cdots,\theta_{k})$). The pmf for this distribution is (letting $x=(x_1,\cdots,x_{k})$ be a one hot vector, i.e. $x_i=1$ and $x_j=0$ for $i\neq j$): $$f_{\theta}(x)=\prod_{i=1}^k\theta_i^{x_i}.$$ To write this as an exponential family, note that $h(x)=1$, $\eta(\theta)=(\log[\theta_1/\theta_k],\cdots, \log[\theta_{k-1}/\theta_k],0)$, $t(x)=(x_1,\cdots,x_{k})$, and $A(\theta)=-\log[\theta_k]$, so: $$f_{\theta}(x)=\exp[(\log[\theta_1/\theta_k],\cdots, \log[\theta_{k-1}/\theta_k],0)^T(x_1,\cdots,x_{k})-(-\log[\theta_k])].$$ Now let's suggestively write $\eta(\theta_i)=\log[\theta_i/\theta_k]=\eta_i$, so that we can write $\theta_i=\frac{e^{\eta_i}}{\sum_{j=1}^ke^{\eta_j}}$. Then the log normalizer becomes $$A(\eta)=-\log\left[\frac{e^{\eta_k}}{\sum_{j=1}^ke^{\eta_j}}\right]= -\log\left[\frac{1}{\sum_{j=1}^ke^{\eta_j}}\right]=\log\left[\sum_{j=1}^ke^{\eta_j}\right].$$ Taking the partial derivative with respect to $\eta_i$, we find $$\frac{\partial}{\partial \eta_i}A(\eta)=\frac{e^{\eta_i}}{\sum_{j=1}^ke^{\eta_j}},$$ revealing that the gradient of the log normalizer is indeed the softmax function: $$\nabla A(\eta)=\left[\frac{e^{\eta_1}}{\sum_{j=1}^ke^{\eta_j}},\cdots,\frac{e^{\eta_k}}{\sum_{j=1}^ke^{\eta_j}}\right].$$
What is the gradient-log-normalizer? Using notation from the wikipedia page (https://en.wikipedia.org/wiki/Exponential_family), an exponential family is a family of probability distributions that have pmfs/pdfs that can be written as (no
32,239
X,Y univariate random variable with $F_{X,Y}(x,y)=G_1(x)G_2(y)$: are they independent?
Yes, it's true that these assumptions imply $X$ and $Y$ are independent. Simplify the notation by writing $F = F_{X,Y}$. By definition, $$F(x,y) = \Pr(X \le x, Y \le y).$$ Therefore the limit of $F(x,y)$ as $y$ increases without bound exists and is the chance that $X$ does not exceed $x$: $$F_X(x) = \Pr(X \le x) = \lim_{y\to\infty} F(x,y) = G_1(x) \lim_{y\to\infty} G_2(y).$$ Choosing any $x$ for which $F_X(x)\ne 0$ shows $G_2^\infty = \lim_{y\to\infty}G_2(y)$ is nonzero. (Such an $x$ must exist by the law of total probability, which asserts $\lim_{x\to\infty}F_X(x)=1$.) Thus $$G_1(x) = \frac{F_X(x)}{G_2^\infty}$$ for all $x$. Exchanging the roles of $X$ and $Y$ and using analogous notation, $$G_2(y) = \frac{F_Y(y)}{G_1^\infty}$$ for all $y$. Taking the joint limit as both $x$ and $y$ grow without bound shows $$1 = \lim_{x,y\to\infty} F(x,y) = G_1^\infty G_2^\infty.$$ Therefore $$F(x,y) = G_1(x)G_2(y) = \frac{F_X(x)F_Y(y)}{G_1^\infty G_2^\infty} = F_X(x)F_Y(y),$$ demonstrating $X$ and $Y$ are independent.
X,Y univariate random variable with $F_{X,Y}(x,y)=G_1(x)G_2(y)$: are they independent?
Yes, it's true that these assumptions imply $X$ and $Y$ are independent. Simplify the notation by writing $F = F_{X,Y}$. By definition, $$F(x,y) = \Pr(X \le x, Y \le y).$$ Therefore the limit of $F(x
X,Y univariate random variable with $F_{X,Y}(x,y)=G_1(x)G_2(y)$: are they independent? Yes, it's true that these assumptions imply $X$ and $Y$ are independent. Simplify the notation by writing $F = F_{X,Y}$. By definition, $$F(x,y) = \Pr(X \le x, Y \le y).$$ Therefore the limit of $F(x,y)$ as $y$ increases without bound exists and is the chance that $X$ does not exceed $x$: $$F_X(x) = \Pr(X \le x) = \lim_{y\to\infty} F(x,y) = G_1(x) \lim_{y\to\infty} G_2(y).$$ Choosing any $x$ for which $F_X(x)\ne 0$ shows $G_2^\infty = \lim_{y\to\infty}G_2(y)$ is nonzero. (Such an $x$ must exist by the law of total probability, which asserts $\lim_{x\to\infty}F_X(x)=1$.) Thus $$G_1(x) = \frac{F_X(x)}{G_2^\infty}$$ for all $x$. Exchanging the roles of $X$ and $Y$ and using analogous notation, $$G_2(y) = \frac{F_Y(y)}{G_1^\infty}$$ for all $y$. Taking the joint limit as both $x$ and $y$ grow without bound shows $$1 = \lim_{x,y\to\infty} F(x,y) = G_1^\infty G_2^\infty.$$ Therefore $$F(x,y) = G_1(x)G_2(y) = \frac{F_X(x)F_Y(y)}{G_1^\infty G_2^\infty} = F_X(x)F_Y(y),$$ demonstrating $X$ and $Y$ are independent.
X,Y univariate random variable with $F_{X,Y}(x,y)=G_1(x)G_2(y)$: are they independent? Yes, it's true that these assumptions imply $X$ and $Y$ are independent. Simplify the notation by writing $F = F_{X,Y}$. By definition, $$F(x,y) = \Pr(X \le x, Y \le y).$$ Therefore the limit of $F(x
32,240
Is a model with a sine wave time-series stationary?
Stationarity is a property of a stochastic process. A perfect sine wave is not a stochastic process. Hence, it can't be stationary or non-stationary. It doesn't have any random parts. $$y_t=\sin (\phi t+\theta)$$ It's like asking whether a song is black or white. The music has no color, it has many other properties but color is not one of them. Now, you could look at the problem differently. As you wrote the phase and frequency are unknown. So, if you look at the family of processes: $$y_t=\sin (\phi_i t+\theta_i)$$ Where $\phi_i,\theta_i$ come from some distribution, and you're to estimate $E[y_t]$, then it's a more interesting question. It's still not a stochastic process though. The stochastic process represents an evolution of random variables. In the case of a perfect sine wave it's entirely defined by two random variables $\phi_i,\theta_i$. There's no evolution. In other words there's got to be some kind of randomness and uncertainty introduced as time progresses in order for the process to be stochastic. In your case all the uncertainty is introduced at time 0.
32,241
Is a model with a sine wave time-series stationary?
For the sine wave to be stationary it needs a random phase! As whuber points out, it is not enough that the phase is random, it must have a uniform distribution on the interval $[0,2\pi)$.
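A quick Monte Carlo sanity check of this claim (a sketch of my own, with the frequency fixed at 1 for illustration): with $\theta \sim \mathrm{Uniform}[0,2\pi)$ the ensemble mean is approximately zero at every $t$, and the covariance of $y_s$ and $y_t$ depends only on the lag $|s-t|$.
set.seed(1)
R <- 20000
theta <- runif(R, 0, 2*pi)          # uniform random phase
y <- function(t) sin(t + theta)     # R realisations of y_t (frequency = 1)
c(mean(y(1)), mean(y(7)))           # both close to 0
c(cov(y(1), y(3)), cov(y(6), y(8))) # same lag of 2, so (nearly) equal covariances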
32,242
Why is linear regression different from PCA?
With linear regression, we are modeling the conditional mean of the outcome, $E[Y|X] = a + bX$. Therefore, the $X$s are thought of as being "conditioned upon"; part of the experimental design, or representative of the population of interest. That means any distance between the observed $Y$ and its predicted (conditional mean) value, $\hat{Y}$, is thought of as an error and is given the value $r = Y - \hat{Y}$, the "residual error". The conditional error of $Y$ is estimated from these values (again, no variability is attributed to the $X$ values). Geometrically, that is a "straight up and down" kind of measurement. In cases where there is measurement variability in $X$ as well, some considerations and assumptions must be discussed to motivate usage of linear regression in this fashion. In particular, regression models are prone to nondifferential misclassification, which may attenuate the slope of the regression model, $b$.
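A small simulated contrast of the two fits (this example is mine, not the poster's): lm() minimises vertical errors in $Y$, while the first principal component of $(X, Y)$ minimises orthogonal distances, so the two slopes differ.
set.seed(1)
x <- rnorm(200)
y <- 1 + 0.8 * x + rnorm(200, sd = 0.6)
coef(lm(y ~ x))[2]                       # regression slope: vertical (y-direction) errors
v <- prcomp(cbind(x, y))$rotation[, 1]   # first principal direction of (x, y)
unname(v[2] / v[1])                      # "PCA slope": orthogonal errors, typically steeper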
32,243
Why is linear regression different from PCA?
I thought with linear regression we always use some Euclidean distance metric to calculate the error from what our hypothesis function predicts vs. what the actual data point was. You were absolutely right. It's Euclidean in this sense: the observations are the dimensions. Think of your observations of the dependent variable $y_i$ as random variables. So you have an $N$-dimensional vector $Y=(y_1,y_2,\dots,y_N)$. You estimate the model and obtain an $N$-dimensional vector of predictions $\hat Y=(\hat y_1,\hat y_2,\dots,\hat y_N)$. Now you minimize the sum of squared errors (SSE), which is the squared Euclidean distance between the actual and predicted values: $||\hat Y-Y||^2=\sum_{i=1}^N (\hat y_i-y_i)^2$
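A tiny numerical check of that identity (data simulated just for illustration):
set.seed(1)
x <- rnorm(50)
y <- 2 + 3 * x + rnorm(50)
fit <- lm(y ~ x)
sum((fitted(fit) - y)^2)   # squared Euclidean distance between Y-hat and Y in R^50
sum(resid(fit)^2)          # the same SSE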
32,244
Relationship between F and Student's t distributions
It's because quantiles are only preserved under monotone transformations, and the square function fails to be monotone when we're dealing with positive and negative numbers (a $t$ random variable can be both). If we look at the $0.8$ quantile of the $F$ distribution $q_{0.8}$ then we know $80\%$ of the probability mass lies between this point and zero. But that means $80\%$ of the probability mass of the corresponding $t$ distribution lies between $-\sqrt{q_{0.8}}$ and $\sqrt{q_{0.8}}$, and so $\sqrt{q_{0.8}}$ is not the $0.8$ quantile of the $t$ distribution. This value corresponds to a larger quantile since we are not including the probability below $-\sqrt{q_{0.8}}$. Because the $t$ distribution is symmetric about zero the extra probability we would be adding is $(1 - 0.8) / 2 = 0.1$, which explains the $0.9$.
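A short numerical illustration in R (the degrees of freedom are chosen arbitrarily):
df <- 10
qt(0.9, df)^2    # square of the 0.9 quantile of the t distribution with 10 df
qf(0.8, 1, df)   # 0.8 quantile of the F(1, 10) distribution -- the same number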
32,245
What is exactly distributed according to t-distribution?
You are very close... If $X_1, \dots, X_n$ is a sample of i.i.d normal observations with mean $\mu$ and variance $\sigma^2$, then the standardized mean $$ \frac{\bar X_n-\mu}{\sigma/\sqrt{n}} $$ is standard normal. Now, as you pointed out, in reality we never know $\sigma$. So we replace $\sigma$ by its sample estimate $S$ and consider the "studentized" mean $$ T = \frac{\bar X_n-\mu}{S/\sqrt{n}} $$ instead. This random variable is slightly different from the one above. Consequently, its distribution is slightly non-normal, namely Student with $n-1$ degrees of freedom. For not too small $n$, $S$ is close to $\sigma$ (that's the consistency of the sample standard deviation). Then, the standardized mean is very close to the studentized one. This explains why the Student distribution with many degrees of freedom looks like the normal. The studentized mean is the starting point to derive confidence intervals and hypothesis tests for $\mu$. Example: To find a lower 95% confidence limit $\bar X_n -c$ for $\mu$, you solve the following equation $$ P(\bar X_n -c \le \mu) = 0.95 $$ for $c$. To do so, you try to modify the equation in the probability so that the studentized mean appears (try to figure out the substeps): $$ P(T \le \frac{c}{S/\sqrt{n}}) = 0.95. $$ Then you use the fact that $T$ has a Student distribution with $n-1$ df to get rid of the probability: $$ \frac{c}{S/\sqrt{n}} = qt_{0.95;n-1}, $$ where $qt_{0.95;n-1}$ is the corresponding 95% quantile. Thus, $$ c = \frac{S}{\sqrt{n}} \cdot qt_{0.95;n-1} $$ and the (famous) lower confidence limit follows: $$ \bar X_n - \frac{S}{\sqrt{n}} \cdot qt_{0.95;n-1} $$
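A short R sketch with simulated data (my own toy numbers): the hand-computed lower limit agrees with the one-sided interval returned by t.test().
set.seed(1)
x <- rnorm(25, mean = 5, sd = 2)
n <- length(x); S <- sd(x)
mean(x) - S / sqrt(n) * qt(0.95, df = n - 1)    # lower 95% confidence limit for mu
t.test(x, alternative = "greater")$conf.int[1]  # the same value from t.test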
32,246
Estimating Multilevel Logistic Regression Models
There are perhaps too many questions here. Some comments:
- You might consider using glmer from the lme4 package (glmer(Y~X*Z+(1|cluster), family=binomial, data=sim_data)); it uses the Laplace approximation or Gauss-Hermite quadrature, which are generally more accurate than PQL (although the answers are very similar in this case). The niter argument specifies the maximum number of iterations; only one iteration was actually necessary.
- I'm not sure what your question is about the interaction term. Whether you should drop non-significant interaction terms or not is a bit of a can of worms, and depends both on your statistical philosophy and on the goals of your analysis (e.g. see this question).
- The denominator degrees of freedom are being calculated according to a simple 'inner-outer' rule described on page 91 of Pinheiro and Bates (2000), which is available on Google Books. It is generally a reasonable approximation, but the computation of degrees of freedom is complex, especially for GLMMs.
- If you're trying to replicate "A simulation study of sample size for multilevel logistic regression models" by Moineddin et al. (DOI: 10.1186/1471-2288-7-34), you need to run a large number of simulations and compute averages, not just compare a single run. Furthermore, you should probably try to get closer to their methods: coming back to my first point, they state that they use SAS PROC NLMIXED with adaptive Gauss-Hermite quadrature, so you'd be better off with e.g. glmer(..., nAGQ=10); it still won't match exactly, but it'll probably be closer than glmmPQL.
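A sketch of a single simulation run along those lines (the sample sizes and coefficient values below are my own illustrative choices, not the ones from Moineddin et al.):
library(lme4)
set.seed(1)
n_clust <- 30; n_per <- 20
cluster <- rep(seq_len(n_clust), each = n_per)
u <- rnorm(n_clust, sd = 1)                         # cluster-level random intercepts
X <- rnorm(n_clust * n_per)
Z <- rbinom(n_clust * n_per, 1, 0.5)
eta <- -1 + 0.5 * X + 0.3 * Z + 0.2 * X * Z + u[cluster]
sim_data <- data.frame(Y = rbinom(length(eta), 1, plogis(eta)), X, Z, cluster)
fit <- glmer(Y ~ X * Z + (1 | cluster), family = binomial,
             data = sim_data, nAGQ = 10)            # adaptive Gauss-Hermite quadrature
summary(fit)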
32,247
How to apply regression on principal components to predict an output variable?
You don't choose a subset of your original 99 (100-1) variables. Each of the principal components are linear combinations of all 99 predictor variables (x-variables, IVs, ...). If you use the first 40 principal components, each of them is a function of all 99 original predictor-variables. (At least with ordinary PCA - there are sparse/regularized versions such as the SPCA of Zou, Hastie and Tibshirani that will yield components based on fewer variables.) Consider the simple case of two positively correlated variables, which for simplicity we will assume are equally variable. Then the first principal component will be a (fractional) multiple of the sum of both variates and the second will be a (fractional) multiple of the difference of the two variates; if the two are not equally variable, the first principal component will weight the more-variable one more heavily, but it will still involve both. So you start with your 99 x-variables, from which you compute your 40 principal components by applying the corresponding weights on each of the original variables. [NB in my discussion I assume $y$ and the $X$'s are already centered.] You then use your 40 new variables as if they were predictors in their own right, just as you would with any multiple regression problem. (In practice, there's more efficient ways of getting the estimates, but let's leave the computational aspects aside and just deal with a basic idea) In respect of your second question, it's not clear what you mean by "reversing of the PCA". Your PCs are linear combinations of the original variates. Let's say your original variates are in $X$, and you compute $Z=XW$ (where $X$ is $n\times 99$ and $W$ is the $99\times 40$ matrix which contains the principal component weights for the $40$ components you're using), then you estimate $\hat{y}=Z\hat{\beta}_\text{PC}$ via regression. Then you can write $\hat{y}=Z\hat{\beta}_\text{PC}=XW\hat{\beta}_\text{PC}=X\hat{\beta}^*$ say (where $\hat{\beta}^*=W\hat{\beta}_\text{PC}$, obviously), so you can write it as a function of the original predictors; I don't know if that's what you meant by 'reversing', but it's a meaningful way to look at the original relationship between $y$ and $X$. It's not the same as the coefficients you get by estimating a regression on the original X's of course -- it's regularized by doing the PCA; even though you'd get coefficients for each of your original X's this way, they only have the d.f. of the number of components you fitted. Also see Wikipedia on principal component regression.
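A small numerical sketch of that algebra (simulated predictors and response of my own; 40 of the 99 components kept):
set.seed(1)
n <- 200; p <- 99; k <- 40
X <- matrix(rnorm(n * p), n, p)
y <- as.numeric(X %*% rnorm(p, sd = 0.1) + rnorm(n))
Xc <- scale(X, center = TRUE, scale = FALSE)   # centre the predictors
yc <- y - mean(y)                              # centre the response
W  <- prcomp(Xc)$rotation[, 1:k]               # 99 x 40 matrix of component weights
Z  <- Xc %*% W                                 # component scores
beta_pc   <- coef(lm(yc ~ Z - 1))              # regression on the 40 PCs
beta_star <- W %*% beta_pc                     # implied coefficients on the original 99 variables
max(abs(Xc %*% beta_star - Z %*% beta_pc))     # identical fitted values, up to rounding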
32,248
How should I implement this interaction between a continuous and categorical predictor?
Assuming your continuous variable is $x_1$ and we expand the binary $x_2$ to include $x_3$ then I suggest using: $$ y_{i} = \beta_0 + \beta_1x_1 + \beta_2x_2 + \beta_3 x_3 + \beta_4x_1x_2 + \beta_5x_1x_3 + \varepsilon_i $$ So if the continuous variable interacts with the reference category, it will be included in the model by default. If there is an interaction with the second or third category then $\beta_4$ or $\beta_5$ will contain the difference from the reference category. Also, as you're suggesting, it wouldn't make any sense to put an interaction effect between $x_2$ and $x_3$. The indicator variables should be coded with 0,1. In this case, if the indicator is not true, the variable is zero and the corresponding $\beta$ drops out of the equation. This makes for much easier interpretation of the coefficients. For example, suppose your observation falls in the second category ($x_2=1$). Then for a given $x_1$, the interpretation of the category effect is $\beta_0 + \beta_2$. And given $x_2=1$, for every unit increase in $x_1$ there is a $\beta_1 + \beta_4$ increase in your response variable. Here is a wonderful post about centering variables. As a short answer, if you were going to center your variable before, then adding interaction effects shouldn't change that. Actually, adding interaction effects is one reason that some people start centering variables to reduce collinearity.
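An illustrative R fit (simulated data of my own, with category "A" as the reference): lm() builds exactly the dummy and interaction columns written above.
set.seed(1)
n <- 150
x1  <- rnorm(n)
grp <- factor(sample(c("A", "B", "C"), n, replace = TRUE))   # "A" is the reference category
y   <- 1 + 2 * x1 + 0.5 * (grp == "B") - 0.3 * (grp == "C") +
       1.0 * x1 * (grp == "B") + rnorm(n)
fit <- lm(y ~ x1 * grp)   # terms: x1, grpB, grpC, x1:grpB, x1:grpC (beta_1 ... beta_5 above)
summary(fit)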
32,249
Back-testing or cross-validating when the model-building process was interactive
FYI, this might be more appropriate for SE.DataScience, but for the time being, I'll answer it here. It seems to me like you might be in a situation where you will have no choice but to write a script that will implement your solutions. Never having worked with splines, my knowledge of them is strictly theoretical so please bear with me and let me know if there is anything I'm not seeing. Broadly speaking, it appears that you have a couple of different items that you will have to resolve in order to implement this. 1.) Determining the model parameters in a dynamic fashion. You have previously mentioned that you've used a combination of domain knowledge and univariate measures. That seems to me like something that you should be able to handle heuristically. You will have to agree at the outset on a set of rules which your program will implement. This may or may not be a trivial task as you will have to do some hard thinking about the potential implications of those rules. This may require you to re-visit every step of your process and catalog not just the decisions, but also the reasons behind those decisions. 2.) Actually implementing your program. In order to make your performance testing properly dynamic and easy to maintain and modify going forward, you will have to think about how you're going to structure it. You will likely want to use some sort of loop for your main model predictive performance estimation, preferably with a user-definable length in order to allow for greater flexibility going forward. You will also likely want to write separate functions for each action that you want your program to take as this will make it easier to test functionality, and to maintain and modify your program going forward. You will, at a minimum, likely need functions for dataset selection (i.e. only time periods that have "gone by" at the moment of backtesting), cleaning and validation (which you'll really have to think about, as data munging is a critical part of model building), functions for model training parameters, and functions for model prediction and performance measure collection and storage. Your question about outlier detection and handling also falls under those two concerns and I would go about implementing it by writing smaller loops within your main program loop that would continue to "clean" and refit the model until it's reached a point where you would be happy with it (which again, you'll have to define yourself). If this sounds like a big task, it's because it is; people have written entire software libraries (sometimes very lucratively) in order to perform this sort of task. Beyond that, it's hard to offer any more specific advice without knowing more about your processes, data structure, and the programming language you've done your work in thus far. If any of this is useful to you and you'd like me to expand on any of it, comment, let me know, and I'd be more than happy to do so.
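A toy rolling-origin loop in R, just to make that skeleton concrete (a plain AR(2) fit via arima() stands in for the real spline-based model-building steps, and the window length is the kind of user-definable setting mentioned above; all numbers are invented for illustration):
set.seed(1)
y <- as.numeric(arima.sim(list(ar = 0.6), n = 120)) + 10   # toy series
window <- 60; horizon <- 1
errs <- numeric(0)
for (origin in window:(length(y) - horizon)) {
  train <- y[(origin - window + 1):origin]                 # only data available at this origin
  fit   <- arima(train, order = c(2, 0, 0))                # placeholder for the real model fit
  pred  <- as.numeric(predict(fit, n.ahead = horizon)$pred)
  errs  <- c(errs, y[origin + horizon] - pred)
}
sqrt(mean(errs^2))   # out-of-sample RMSE across all backtest origins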
32,250
Back-testing or cross-validating when the model-building process was interactive
Rather than trying to figure out how to automate your manual model tuning efforts, I would circumvent that problem altogether by looking into lower-variance learners that require far less tuning, even if that comes at some cost of increased model bias. You want confidence in your backtest results, which largely comes down to low sampling variance in your predictions, and introducing an automated tuning process on top of a learner that already has sampling variance of its own works against that goal. It might seem like the tail is wagging the dog here, but anything that requires a lot of careful tuning (manual or automated) is not a great candidate for a truly honest backtest environment IMO.
32,251
How to find weights for a dissimiliarity measure
This is a big issue in some areas of machine learning. I'm not as familiar with it as I'd like, but I think these should get you started. Dimensionality Reduction by Learning an Invariant Mapping (DrLIM) seems to work very well on some data sets. Neighborhood components analysis is a very nice linear algorithm, and nonlinear versions have been developed as well. There's a whole literature that deals with this issue from the perspective of "learning a kernel". I don't know much about it, but this paper is highly cited. Given that your data is so high-dimensional (and probably sparse?), you might not need anything too nonlinear. Maybe neighborhood components analysis is the best place to start? It's closest to the idea of a weighted $L_2$ norm, like you suggested in your question.
32,252
How to find weights for a dissimiliarity measure
Putting a weight $w_i$ on a feature in your dissimilarity measure is equivalent to rescaling that feature in your data set (for a squared Euclidean distance, a weight $w_i$ is the same as multiplying the feature by $\sqrt{w_i}$). In other words, you are asking about data preprocessing and scaling. This is too broad to be answered well in a single question. Look for:
- feature selection
- feature weighting
- normalization
- dimensionality reduction
- other projection techniques
- other distance functions
- "learning to rank"
There is a massive amount of literature and even conference tracks dedicated to this. Some methods to get you started: Fisher's linear discriminant analysis, Large Margin Nearest Neighbors.
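A minimal check of that equivalence for a squared Euclidean distance (the numbers are arbitrary):
x <- c(1, 2, 3); y <- c(2, 0, 5)
w <- c(0.5, 2, 1)                        # hypothetical feature weights
sum(w * (x - y)^2)                       # weighted squared distance
sum((sqrt(w) * x - sqrt(w) * y)^2)       # identical value on the rescaled data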
32,253
Logarithm of incomplete Beta function for large $\alpha,\beta$
The incomplete Beta function can be written with the help of the Gauss hypergeometric function: $$B_x(a,b)=\frac{1}{a}x^a{(1-x)}^b F(a+b,1,a+1,x)$$ and the CDF of the $Beta(a,b)$ distribution evaluated at $x$ is $$B_x(a,b)/B(a,b).$$ A good implementation of the Gauss hypergeometric function is provided in the gsl package - a wrapper for the special functions of the GNU Scientific Library. So you can write the log-CDF like this:
library(gsl)
logpbeta <- function(x,a,b) log(hyperg_2F1(a+b,1,a+1,x)) + a*log(x) + b*log(1-x) - log(a) - lbeta(a,b)
And it gives the same result as Mathematica for your example:
> logpbeta(0.5555555, 1925.74, 33.7179)
[1] -994.7676
I don't know for which range of the parameters the result is correct, and I have not found the answer in the GNU reference manual. Note that the zero you obtain with pbeta seems to be due to the evaluation of $B_x(a,b)$ followed by the log-transformation:
> x <- 0.5555555
> a <- 1925.74
> b <- 33.7179
> log(hyperg_2F1(a+b,1,a+1,x)*x^a*(1-x)^b/a)
[1] -Inf
> hyperg_2F1(a+b,1,a+1,x)
[1] 2.298761
> x^a*(1-x)^b/a
[1] 0
32,254
Poisson Hypothesis Testing for Two Parameters
Note that normally the equality goes in the null (with good reason). That issue aside, I'll mention a couple of approaches to a test of this kind of hypothesis.
- A very simple test: condition on the total observed count $n$, which converts it to a binomial test of proportions. Imagine there are $w_\text{on}$ on-weeks and $w_\text{off}$ off-weeks and $w$ weeks combined. Then under the null, the expected proportions are $\frac{w_\text{on}}{w}$ and $\frac{w_\text{off}}{w}$ respectively. You can do a one-tailed test of the proportion in the on-weeks quite easily.
- You could construct a one-tailed test by adapting a statistic related to a likelihood-ratio test; the z-form of the Wald test or a score test can be done one-tailed, for example, and should work well for largish $\lambda$.
There are other takes on it.
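A sketch of the conditional (binomial) version in R; the counts are hypothetical (8 calls over 2 on-weeks versus 5 calls over 2 off-weeks):
calls_on <- 8; calls_off <- 5
w_on <- 2; w_off <- 2
binom.test(calls_on, calls_on + calls_off,
           p = w_on / (w_on + w_off),
           alternative = "greater")    # one-tailed: is the on-week rate higher?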
32,255
Poisson Hypothesis Testing for Two Parameters
What about just using a GLM with a Poisson error structure and log link? But the idea about the binomial may be more powerful.
32,256
Poisson Hypothesis Testing for Two Parameters
I'd settle it with a Poisson or quasi-Poisson GLM, with a preference for quasi-Poisson or negative binomial. The problem with using the traditional Poisson is that it requires the variance and mean to be equal, which is very likely not the case. The quasi-Poisson or NB estimates the variance unrestricted by the mean. You could do any of these in R very easily.
# week on = 1, week off = 0
week.status <- c(1, 1, 0, 0)
calls <- c(2, 6, 2, 3)
model <- glm(calls ~ week.status, family = poisson())
# or change the poisson() after family to quasipoisson()
# or use the negative binomial GLM from the MASS package
The GLM approach is beneficial because you can expand it to include additional variables (e.g., month of year) that might impact call volume. To do it by hand, I'd probably use a normal approximation and a two-sample t test.
32,257
Poisson Hypothesis Testing for Two Parameters
We start with the maximum likelihood estimates of the Poisson parameters, which are the sample means. So, $\hat\lambda_1=\bar Y$ and $\hat\lambda_2=\bar X$. Now you can use the approximation $\bar Y-\bar X\sim N\left(\lambda_1-\lambda_2,\frac{\lambda_1}{n_1}+\frac{\lambda_2}{n_2}\right)$ and compute the Z-value $$Z=\frac{(\bar Y-\bar X)-(\lambda_1-\lambda_2)}{\sqrt{\frac{\hat\lambda_1}{n_1}+\frac{\hat\lambda_2}{n_2}}},$$ which under $H_0:\lambda_1=\lambda_2$ reduces to $\frac{\bar Y-\bar X}{\sqrt{\hat\lambda_1/n_1+\hat\lambda_2/n_2}}$. Note: the rejection region depends on the direction of the alternative (e.g. reject for $Z$ beyond the critical value in the direction being tested).
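A minimal R version with made-up weekly counts (the normal approximation is rough for counts this small, so treat it purely as an illustration of the formula):
y_on  <- c(2, 6)                   # on-week counts
y_off <- c(2, 3)                   # off-week counts
lam1 <- mean(y_on);  n1 <- length(y_on)
lam2 <- mean(y_off); n2 <- length(y_off)
z <- (lam1 - lam2) / sqrt(lam1 / n1 + lam2 / n2)   # Wald statistic under H0: lambda1 = lambda2
pnorm(z, lower.tail = FALSE)                       # one-sided p-value for lambda1 > lambda2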
32,258
Poisson Hypothesis Testing for Two Parameters
Starting from page 125 of Testing Statistical Hypotheses (third edition), the answer to the type of question you have formulated is outlined. I have attached a link to a pdf I found online for your reference: Testing Statistical Hypotheses, Third Edition.
32,259
LASSO for explanatory models: shrinked parameters or not?
If your goal is to accurately estimate the parameters in your model then how close you are to the true model is how you should select your model. Predictive validity via cross-validation is one way to do this and is the preferred$^*$ way for selecting $\lambda$ in LASSO regression. Now, to answer the question as to which parameter estimate is the "real estimate", one should look at which parameter estimate is "closest" to the real parameter value. Does "closest" mean the parameter estimates that minimize bias? If so, then the least squares estimator is unbiased in linear regression. Does closest mean the parameter estimate that minimizes mean square error (MSE)? Then it can be shown that there is a specification of ridge regression that will give you estimates that minimize MSE (similar to LASSO, ridge regression shrinks parameter estimates toward zero but, different from LASSO, parameter estimates do not reach zero). Similarly, there are several specifications of the tuning parameter $\lambda$ in LASSO that will result in smaller MSE than linear regression (see here). As the statistician, you have to determine what is the "best" estimate and report it (preferably with some indication of the confidence of the estimate) to those who are not well versed in statistics. What is "best" may or may not be a biased estimate. The glmnet function in R does a pretty good job of selecting good values of $\lambda$ and, in summary, selecting $\lambda$ through cross-validation and reporting the parameter estimates is a perfectly reasonable way to estimate the "real" value of the parameters. $^*$A Bayesian LASSO model that selects $\lambda$ by marginal likelihood is preferred by some but I'm, perhaps incorrectly, assuming you are doing a frequentist LASSO model.
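A sketch contrasting cross-validated LASSO estimates with ordinary least squares (simulated data of my own, just to show the shrinkage):
library(glmnet)
set.seed(1)
n <- 100; p <- 20
x <- matrix(rnorm(n * p), n, p)
beta_true <- c(2, -1.5, 1, rep(0, p - 3))
y <- as.numeric(x %*% beta_true + rnorm(n))
cvfit <- cv.glmnet(x, y)                 # lambda chosen by 10-fold cross-validation
coef(cvfit, s = "lambda.min")            # shrunken (biased) LASSO estimates
coef(lm(y ~ x))                          # unpenalized least-squares estimates for comparison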
32,260
What are the differences between filters learned in autoencoder and convolutional neural network?
In the case of a CNN, filters are applied to small patches of an image at each possible location (which also makes them translation invariant). An autoencoder's hidden layers get the whole image (or the output of the previous layer) as their input, which doesn't look like a good idea for images: usually only spatially local features correlate, whereas more distant ones are less correlated. Also, these hidden neurons are not translation invariant. Thus, CNNs are like usual ANNs with a special kind of regularization, which zeros out most of the weights to make use of locality.
32,261
Is a Bayesian estimate with a "flat prior" the same as a maximum likelihood estimate?
Summarizing and extending from the comments: "A Bayesian MAP estimate may coincide with an MLE. However, the posterior distribution has no equivalent from a likelihood perspective". What do you mean by "a Bayesian estimate"? Often, with Bayes, we will just summarize the data by the posterior distribution (assuming it exists; in some cases, with a flat prior that does not integrate to one, we get a formal posterior which does not integrate to one, so it is not really a distribution). Such a Bayesian summary does not have a likelihood counterpart, as usually seen. Some are trying to rectify this by introducing the concept of a confidence distribution based on the likelihood function, see http://folk.uio.no/tores/Publications_files/Schweder_Hjort_Confidence%20and%20likelihood_SJS2002.pdf (and their forthcoming book). But, if you go the way of defining a Bayes estimator, you have various ways of doing that! You can choose the MAP estimator, which formally may be the same as the MLE. Or you can choose an estimator based on decision theory, by minimizing some posterior expected loss function. Many possibilities, and none of those has a likelihood equivalent.
32,262
How to calculate a sample size for validating correct/incorrectness of records in a data table?
This can be framed as testing the null hypothesis that there are some invalid records in the data set ($K>0$) vs the alternative that there are none ($K=0$), given that there are no invalid records found in the sample ($k=0$). The proximal null, the toughest to reject, is that there's a single invalid record ($K=1$). Substitute these into the hypergeometric probability mass function for a sample of size $n$ from a data-set of size $N$ to get the p-value (there are no possible smaller values of $k$ to be considered): $$f(k) = \frac{\binom{K}{k}\binom{N-K}{n-k}}{\binom{N}{n}}$$ $$ = \frac{\binom{1}{0}\binom{N-1}{n-0}}{\binom{N}{n}}$$ $$ =\frac{N-n}{N}=p$$ So the minimum sample size $n^*$ required to be able to reject the null hypothesis at a significance level $p$ (or equivalently to obtain a one-sided $\alpha=1-p$ confidence interval of $K=0$) is simply $$n^*=\lceil (1-p) N \rceil$$ $$n^*=\lceil \alpha N \rceil$$ With $N=1000$, and $\alpha=0.95$, $n^*=950$. If that seems a lot, consider that all of a thousand records' being valid is a strict criterion; if you consider relaxing it the same approach can be used to test say $K>9$.
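As a quick check of the formula, the same numbers can be reproduced with R's hypergeometric functions (the values of N and alpha below are just those used in the example):

N      <- 1000
alpha  <- 0.95
n_star <- ceiling(alpha * N)                 # 950
# P(zero invalid records in the sample | exactly K = 1 invalid record in the table)
p_value <- dhyper(0, m = 1, n = N - 1, k = n_star)
p_value                                      # (N - n_star) / N = 0.05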
32,263
Test if 2 exponentially distributed datasets are different
Exponentially distributed lifetimes are an especially simple case for survival analysis. Analyzing them is often the first example worked to get students started before moving to more complicated situations. In addition, survival analysis is naturally suited to censored data. In short, I suggest you use survival analysis with a grouping indicator for the two distributions as a treatment effect. You could use a parametric model (e.g., the Weibull model, as the exponential is a special case of the Weibull), or you could use non-parametric methods, such as the log rank test, if you prefer.
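A minimal sketch of the suggested approach in R (the vectors time1 and time2 stand for the two samples and are placeholders; here all observations are assumed uncensored, so status = 1):

library(survival)
dat <- data.frame(time   = c(time1, time2),
                  status = 1,
                  group  = rep(c("A", "B"), c(length(time1), length(time2))))
survdiff(Surv(time, status) ~ group, data = dat)   # non-parametric log-rank test
survreg(Surv(time, status) ~ group, data = dat,
        dist = "exponential")                       # parametric model; dist = "weibull" also works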
32,264
Test if 2 exponentially distributed datasets are different
You are interested in the following test: $H_0: \lambda_1 = \lambda_2$, where $\lambda_i$ is the single parameter that uniquely identifies the exponential distribution you are dealing with. Since the mean of an exponential distribution is a one-to-one function of $\lambda$ (it equals $1/\lambda$ under the rate parameterization, or $\lambda$ itself if you parameterize by the mean), this is equivalent to testing the difference of means between the two distributions. Since you have a large sample size, to test this we may appeal to the central limit theorem, which tells us the following: Central Limit Theorem: suppose $X_1, X_2, \ldots, X_n$ is a sequence of i.i.d. random variables with $E[X_i] = \mu \text{ and } Var[X_i] = \sigma^2 < \infty$. Then as $n$ approaches infinity, the random variable $\sqrt{n}(\bar{X} - \mu)$ converges in distribution to a normal $N(0, \sigma^2)$ distribution. In other words, your sample means for each of the two groups are approximately normally distributed. Since you don't know the true value of $\sigma^2$, you may perform a t-test for a difference of means.
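A small simulation sketch of this approach (the rates and sample sizes are invented for illustration):

set.seed(1)
x <- rexp(500, rate = 1/10)   # sample with mean 10
y <- rexp(500, rate = 1/12)   # sample with mean 12
t.test(x, y)                  # Welch t-test for a difference of the two means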
32,265
What's the model representation for the first difference of a local level model?
You have arrived at the stationary form of the local level model: $$ \Delta y_t \equiv x_t = \underbrace{\Delta \alpha_t}_{\eta_{t-1}} + \Delta \epsilon_t \,, $$ where $\Delta$ is the difference operator such that $\Delta y_t = y_t - y_{t-1}$. Now, I think it is easier to first check the statistical properties (mean, covariances, autocorrelations) of this stationary form. For example, the mean of this process is given by: $$ \hbox{E}[x_t] = \hbox{E}[\eta_{t-1}] + \hbox{E}[\epsilon_t] - \hbox{E}[\epsilon_{t-1}] = 0 + 0 - 0 = 0 \,. $$ You can do the same to obtain the covariances of order $k$, $\gamma(k)$: \begin{eqnarray} \begin{array}{lcl} \gamma(0) &=& E\left[(\eta_{t-1} + \epsilon_t - \epsilon_{t-1})^2\right] = \dots \\ \gamma(1) &=& E\left[(\eta_{t-1} + \epsilon_t - \epsilon_{t-1})(\eta_{t-2} + \epsilon_{t-1} - \epsilon_{t-2})\right] = \dots \\ \gamma(2) &=& \cdots \\ \gamma(k),\ k>2 &=& \cdots \end{array} \end{eqnarray} You just need to take the expectation of the cross-products of all terms, bearing in mind that $\eta_t$ and $\epsilon_t$ are each serially independent, independent of each other, and have variances $\sigma^2_\eta$ and $\sigma^2_\epsilon$ respectively. Then it is straightforward to get the expression of the autocorrelations of order $k>0$, $\rho(k) = \frac{\gamma(k)}{\gamma(0)}$. This will have the form that is characteristic of a moving average of order 1, MA(1) (the autocorrelations are zero for $k>1$), and hence $x_t$ can be represented as an MA(1) process and $y_t$ as an ARIMA(0,1,1) process. In order to find the relationship between the parameters of the local level model and the MA coefficient, you can equate the expression of the first-order autocorrelation obtained before with the expression of the first-order autocorrelation of an MA(1). Following the same strategy as above, you can find that $\rho(1)$ for an MA(1) with coefficient $\theta$ is given by $\rho(1) = \theta/(1 + \theta^2)$. The expression that you get by doing this will also reveal that the local level model is a restricted ARIMA(0,1,1) model where the MA coefficient $\theta$ can take only non-positive values. Edit: Equation (c.5) is okay. You can get the relationship between the parameters of the local level model and the MA coefficient by solving equation (c.5) for $\theta$. You can rewrite it as a quadratic equation to be solved for $\theta$. One of the solutions can be discarded because it implies a non-invertible MA, $|\theta|>1$. When solving this equation, it will be helpful to define $q=\sigma^2_\eta/\sigma^2_\epsilon$. Also, check that $\frac{\sqrt{\sigma^4_\eta + 4\sigma^2_\eta\sigma^2_\epsilon}}{2\sigma^2_\epsilon} = \frac{\sqrt{q^2 + 4q}}{2}$. This way you will get a neater expression. Then, given that $0 < q < \infty$, you can check that the range of possible values for $\theta$ is zero or negative.
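A quick simulation sketch of this result (the variances are arbitrary choices, not taken from the exercise): differencing a simulated local level series should produce an MA(1)-type autocorrelation pattern, and fitting an ARIMA(0,1,1) should give a negative MA coefficient.

set.seed(42)
n <- 5000
sigma_eta <- 0.5; sigma_eps <- 1
alpha <- cumsum(rnorm(n, sd = sigma_eta))   # random-walk level
y     <- alpha + rnorm(n, sd = sigma_eps)   # observed series
acf(diff(y), plot = FALSE)$acf[2:4]         # only the lag-1 autocorrelation is non-negligible
arima(y, order = c(0, 1, 1))                # estimated MA(1) coefficient should be negative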
32,266
What's the model representation for the first difference of a local level model?
Some explicit guidance and hints: Your answer in (a) looks okay to me. In (b) you would either need to go on and show the properties of the series $x_t$ (what's its ACF, say? What are the properties you need?) or to explicitly rewrite it in the form of an MA (which is easier, I think - you might just recast it as a transform to an MA in $\zeta$ say, $x_t=\theta(B)\,\zeta_t$, where $\zeta_t=...$). Also I don't think you really need a state equation, so don't worry too much. You can always write a null one.
32,267
Why does adding more terms into a linear model always increase the r-squared value?
Certainly this can happen: if the new predictor is contained in the linear span of the predictors already in the model. Think about it geometrically: your new "fitting subspace" (the set of possible linear combinations of your predictors) is exactly the same as the old one, so the optimal fit and the sum of squares are unchanged. However, this is only a sufficient condition for $R^2$ to be unchanged, not a necessary one. Consider three points like this:

xx <- c(-1, 0, 1)
yy <- c(1, -2, 1)
plot(xx, yy, pch = 19)
abline(h = 0)
abline(v = 0)
model.1 <- lm(yy ~ 1)                    # intercept-only (mean) model
abline(model.1, col = "red", lty = 2)
summary(model.1)
model.2 <- lm(yy ~ xx)                   # add xx as a predictor
abline(model.2, col = "green", lty = 3)
summary(model.2)

If we add xx as a predictor to the simple mean model, we get the same fit and the same $R^2$ (the estimated slope is zero because xx and yy are uncorrelated in this sample). Such a construction should be possible with larger models as well.
32,268
Why does adding more terms into a linear model always increase the r-squared value?
Adding more terms to a linear model either keeps the R-squared value exactly the same or increases it. This is called the non-decreasing property of R-squared. To demonstrate this property, first recall that the objective of least squares linear regression is $$ \min \mathrm{SSE}=\min\displaystyle\sum\limits_{i=1}^n \left(e_i \right)^2= \min_{\beta}\sum_{i=1}^n\left(y_i -\beta_0 - \beta_1x_{i,1} - \beta_2x_{i,2} -\cdots- \beta_px_{i,p}\right)^2 $$ and that R-squared is $$ R^2=1-\frac{SSE}{SST}. $$ When an extra variable is included, the objective becomes $$ \min_{\beta}\sum_{i=1}^n\left(y_i -\beta_0 - \beta_1x_{i,1} - \beta_2x_{i,2} -\cdots- \beta_px_{i,p}-\beta_{p+1}x_{i,p+1}\right)^2. $$ Setting $\beta_{p+1}=0$ reproduces the smaller model, so the minimized SSE of the larger model can never exceed that of the smaller model. If the estimated coefficient $\hat\beta_{p+1}$ is zero, the SSE and the R-squared stay unchanged; if $\hat\beta_{p+1}$ takes a nonzero value, the SSE is no larger and typically smaller, and the R-squared correspondingly no smaller, because the extra term improves the quality of the fit.
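A tiny demonstration in R (the data are simulated purely for illustration): adding a pure-noise predictor never lowers R-squared.

set.seed(123)
x1    <- rnorm(50)
noise <- rnorm(50)                         # unrelated to y by construction
y     <- 1 + 2 * x1 + rnorm(50)
summary(lm(y ~ x1))$r.squared              # R^2 of the smaller model
summary(lm(y ~ x1 + noise))$r.squared      # never smaller than the value above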
32,269
Definition of exponential family
I'm not sure what you're missing except maybe the support: An exponential family is defined with a density or mass function and a support $\Omega$. You can find a measure-theoretic definition of exponential families in: Shao, J. (2003). Mathematical Statistics. Springer. http://books.google.ca/books?id=cyqTPotl7QcC He writes: A parametric family $\{P_\theta: \theta \in \Theta\}$ dominated by a $\sigma$-finite measure $\nu$ on $(\Omega, \mathcal F)$ is called an exponential family if and only if $$\frac{dP_\theta}{d\nu} = \exp\left({[T(\omega)]}^\intercal \eta(\theta) - \xi(\theta)\right)h(\omega), \qquad \omega \in \Omega,$$ where $\exp(x) = e^x$ is the exponential function, $T$ is a random $p$-vector with a fixed positive integer $p$, $\eta$ is a function from $\Theta$ to $\mathcal R^p$, $h$ is a nonnegative Borel function on $(\Omega, \mathcal F)$, and $\xi(\theta) = \log\left(\int_\Omega e^{{[T(\omega)]}^\intercal \eta(\theta)}h(\omega)d\nu(\omega)\right)$. (I changed his notation to write dot products as $x^\intercal y$ rather than his $x y^\intercal$, which looks to me more like an outer product.) I think that $\nu$ then could be your counting measure in the case of a p.m.f., or a Lebesgue measure for a p.d.f., etc.
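As a small worked instance of this definition (my own illustration, not from Shao's book), take the Poisson family on $\Omega = \{0, 1, 2, \ldots\}$ with $\nu$ the counting measure: $$\frac{dP_\theta}{d\nu}(\omega) = \frac{e^{-\theta}\theta^\omega}{\omega!} = \exp\left(\omega \log\theta - \theta\right)\frac{1}{\omega!},$$ so $T(\omega)=\omega$, $\eta(\theta)=\log\theta$, $h(\omega)=1/\omega!$, and indeed $\xi(\theta)=\log\sum_{\omega\ge 0} e^{\omega\log\theta}/\omega! = \theta$.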
32,270
Goodness of fit for discrete data: best approach
If I understood your question correctly, you just need to fit the data to a distribution. In this case, you could use one of the functions in R packages such as fitdistr from the MASS package, which uses maximum likelihood estimation (MLE) and supports discrete distributions, including the binomial and Poisson. Then, as a second step, you would need to perform one (or more) goodness-of-fit (GoF) tests to validate the results. The Kolmogorov-Smirnov, Anderson-Darling and (AFAIK) Lilliefors tests are all not applicable to discrete distributions. Fortunately, however, the chi-square GoF test is applicable to both continuous and discrete distributions, and in R it is a matter of calling the stats::chisq.test() function. Alternatively, as your data represent a discrete distribution, you can use the vcd package and its function goodfit(). This function can be used either as a replacement for the standard GoF test chisq.test(), or, even better, as a full workflow (distribution fitting and GoF testing). For the full workflow option, just use the default setup and do not specify the parameters par (you can specify size, if type = "nbinomial"). The parameters will be estimated using maximum likelihood or minimum chi-square (you can select the method). Results can be obtained by calling the summary() function.
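A short sketch of the two routes described above (the vector x stands for the observed counts and is a placeholder):

library(MASS)
library(vcd)
fitdistr(x, densfun = "Poisson")                    # route 1: ML fit of a Poisson
gf <- goodfit(x, type = "poisson", method = "ML")   # route 2: fit and GoF test in one call
summary(gf)                                         # goodness-of-fit test for the fitted model
plot(gf)                                            # rootogram of observed vs fitted counts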
32,271
Equivalence testing - tost method - why CI of 90%?
The $1-2\alpha$ is not because you calculate the CI for each group separately. It is because you test the "inequivalence" towards the upper and towards the lower end separately. The parameter $\theta$ lies in the equivalence interval $[\epsilon_L, \epsilon_U]$ iff $$\theta \geq \epsilon_L \wedge \theta \leq \epsilon_U.$$ Each part is tested separately by a one-sided test at level $\alpha$. Only if both tests are significant can we conclude equivalence. (This is the very intuitive intersection-union principle.) Turning this into a single confidence interval, we must remove $\alpha$ from both the upper and the lower probability mass of the CI. So we end up with $1-2\alpha$. The TOST-CI is simply the intersection of the one-sided CIs. By the way, it is still possible to do the TOST with a $1-\alpha$ CI, but it would be unnecessarily conservative.
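A minimal sketch of TOST via two one-sided t-tests and the matching 90% CI (x, y and the equivalence margin delta are placeholders):

delta <- 3
t.test(x, y, mu = -delta, alternative = "greater")$p.value   # H0: difference <= -delta
t.test(x, y, mu =  delta, alternative = "less")$p.value      # H0: difference >=  delta
t.test(x, y, conf.level = 0.90)$conf.int   # equivalence <=> this interval lies inside (-delta, delta)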
32,272
Equivalence testing - tost method - why CI of 90%?
The answer to this question is that 90% is possible because of a logical fact that makes this "bonus" in confidence possible. In the TOST procedure, two one-tailed tests are conducted at a 5% level. The type 1 error rate still remains at 5% because if one test decision is a type 1 error, the other one cannot be a type 1 error anymore. For example, if one test falsely states that the difference is larger than -3 (i.e. in fact it is smaller than -3), the other test, which tests whether it is smaller than 3, cannot produce a type 1 error, because the value is in fact smaller than 3.
32,273
Normalize variables for calculation of correlation coefficient
The answer depends on what exactly you're interested in. If you're only interested in whether there is a monotonic relationship between the two variables, use Spearman's rank correlation coefficient. Moreover, as Nick Cox says in his comment, any kind of linear scaling is unnecessary.
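A quick illustration with simulated data (the variables are invented): Spearman's rho is invariant to any monotone rescaling of either variable, so no normalization is needed beforehand.

set.seed(1)
x <- rexp(100)
y <- x^2 + rnorm(100, sd = 0.5)
cor(x, y, method = "spearman")
cor(log(x), 10 * y, method = "spearman")   # identical value: the ranks are unchanged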
32,274
Truncated Von Mises-Fisher distribution
Because the analysis should not be too sensitive to the prior, we should feel free to make minor modifications to the prior. Instead of truncating it, why not reflect all the probability into the first hyperquadrant? That is, continue to use a von Mises-Fisher prior for $(x_1,x_2,\ldots,x_{n})$ (with $n\approx 500$) but base your analysis on $(|x_1|,|x_2|,\ldots,|x_{n}|)$. That would not need any renormalization at all. The objection immediately arises that the calculations would require a $500$-fold sum, amounting to $2^{500}$ terms, which is an impossible calculation. Although that is true, an algebraic simplification makes it possible. I am suggesting using a prior $$f(\mathbf x; \mu, \kappa) = C(\mu, \kappa) \sum_{i\in \{-1,1\}^n} \exp\left(\kappa (i_1 \mu_1, i_2\mu_2, \ldots, i_n\mu_n) \cdot \mathbf x\right)$$ where $C(\mu,\kappa)$ is the normalizing constant for the von Mises-Fisher distribution with parameters $(\mu, \kappa)$, all the $x_i$ are non-negative (and, without any loss of generality, you may as well assume all the $\mu_i$ are non-negative, too). But by separately performing the sum over the last component, the foregoing can be written $$C(\mu, \kappa) \sum_{i\in \{-1,1\}^{n-1}} \left(\exp\left(\kappa (i_1 \mu_1, i_2\mu_2, \ldots,\mu_n) \cdot \mathbf x\right) + \exp\left(\kappa (i_1 \mu_1, i_2\mu_2, \ldots,-\mu_n) \cdot \mathbf x\right)\right) \\ = C(\mu, \kappa) 2\cosh(\kappa \mu_n x_n)\sum_{i\in \{-1,1\}^{n-1}} \exp\left(\kappa (i_1 \mu_1, i_2\mu_2, \ldots,i_{n-1}\mu_{n-1}) \cdot \mathbf x_{[-n]}\right)$$ where $\mathbf x_{[-n]} = (x_1, x_2, \ldots, x_{n-1})$. Proceeding inductively on $n$ yields $$f(\mathbf x; \mu, \kappa) = C(\mu, \kappa) 2^n \prod_{i=1}^n \cosh(\kappa \mu_i x_i)$$ which is quite tractable. For $\kappa \gg 0$ (that is, as this prior grows a little less diffuse), $f(\mathbf x; \mu, \kappa)$ approaches the truncated von Mises-Fisher distribution.
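As a minimal computational sketch of how tractable the final expression is (the function and argument names are my own): the unnormalized log-density of the reflected prior is just a sum of log-cosh terms.

# log f(x; mu, kappa) up to the additive constant log(C(mu, kappa) * 2^n)
log_reflected_vmf <- function(x, mu, kappa) {
  sum(log(cosh(kappa * mu * x)))
}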
32,275
How does the $\phi(x_i)$ function look for Gaussian RBF kernel?
You are missing one thing, namely the fact that we do not need to know the images of data instances in feature space $\phi(\mathbf{x}_i)$. For some kernel functions, the feature space is very complex/unknown (for instance some graph kernels), or infinite dimensional (for example the RBF kernel). Kernel methods only need to be able to compute inner products between two images in feature space, e.g. $\kappa(\mathbf{x}_i,\mathbf{x}_j)=\langle\phi(\mathbf{x}_i),\phi(\mathbf{x}_j)\rangle$. We don't have to know the feature space to be able to compute inner products in it. This is called the kernel trick. For an SVM, specifically, $\mathbf{w}$ is the separating hyperplane in feature space. You cannot always write this down in input space. Again, for the RBF kernel $\mathbf{w}$ resides in an infinite dimensional feature space. All we need to be able to do is compute the inner product of $\mathbf{w}$ and the image of the test instance $\mathbf{z}$ in feature space $\phi(\mathbf{z})$, which is: $$\langle\mathbf{w},\phi(\mathbf{z})\rangle = \sum_{i\in SV}\alpha_i y_i \kappa(\mathbf{x}_i,\mathbf{z}).$$ SVMs exploit the so-called representer theorem, which states that the resulting models can always be expressed as a weighted sum of kernel evaluations between some training instances (the support vectors) and the test instance. This is in fact exploited by all kernel methods. The RBF kernel maps onto an infinite dimensional feature space. For a writeup on this you may consult these slides by Chih-Jen Lin, particularly slides 10 and 11. For a one-dimensional $x$: $$\phi_{RBF}(x) = e^{-\gamma x^2}\big[1,\sqrt{\frac{2\gamma}{1!}}x, \sqrt{\frac{(2\gamma)^2}{2!}}x^2, \sqrt{\frac{(2\gamma)^3}{3!}}x^3,\ldots\big]^T,$$ which follows from the Taylor expansion of the exponential function.
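A small numerical sketch (the values of x, z and gamma are arbitrary): truncating the infinite feature map above and taking an ordinary inner product recovers the RBF kernel value to high accuracy.

rbf_kernel <- function(x, z, gamma) exp(-gamma * (x - z)^2)
phi_trunc  <- function(x, gamma, K = 20) {   # first K + 1 coordinates of phi_RBF(x)
  k <- 0:K
  exp(-gamma * x^2) * sqrt((2 * gamma)^k / factorial(k)) * x^k
}
x <- 0.7; z <- -0.3; gamma <- 1
sum(phi_trunc(x, gamma) * phi_trunc(z, gamma))   # approximately equal to ...
rbf_kernel(x, z, gamma)                          # ... the exact kernel value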
32,276
Extent of multiple testing correction
I think the answer to your question is that multiple-testing correction depends on the context of the problem you are solving. If you first consider a priori testing and post-hoc testing, you can see where correction for multiple tests comes into play. Let's say you formulate a single hypothesis, collect data and test the hypothesis. No need to correct in this case, obviously. If you decide a priori to carry out two or more tests on the data set, you may or may not correct for multiple testing. The correction may be different for each test and may be selected using your domain knowledge. On the other hand, you may simply use one of the usual correction methods. A priori tests are generally small in number. If you had a large number of hypotheses to test, you might decide on larger sample sizes, different samples, etc. In other words, you can design your experiment to give you the best possible chance of drawing correct conclusions from your hypotheses. Post-hoc tests, on the other hand, are performed on a set of data with no particular hypothesis in mind. You are data dredging to some extent and you will certainly need to apply Bonferroni or FDR (or your own favourite) correction. As different data sets collected over your lifetime (or for a paper) are generally independent and ask different questions, there should be no need to worry about correcting for every test ever carried out. Remember that multiple-testing corrections protect against familywise error (i.e. protection for a family of tests) rather than individual test error. If you can logically group your tests into families, I think you will find suitable multiple-comparison bounds for these families.
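Mechanically, applying the usual corrections to a family of p-values is a one-liner in R (the p-values below are invented for illustration):

p <- c(0.001, 0.012, 0.035, 0.040, 0.210)
p.adjust(p, method = "bonferroni")   # controls the family-wise error rate
p.adjust(p, method = "BH")           # Benjamini-Hochberg, controls the false discovery rate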
32,277
Extent of multiple testing correction
You can think of the family-wise error rate (FWER; for more information, see this article). I would say that if you run a single experiment to test A, B, and C, you should apply multiple-testing correction. If you run a separate experiment for each of A, B, and C, then no correction will be needed. You may be asking why we should need to control the error rate on a per-experiment basis. Here is my opinion. Imagine that some NIH or FDA type institution mandated that you correct for every test you have ever done. Consider that you run an experiment with a single test, and that it is your first experiment. No adjustment will be needed here. Now consider that you run a new experiment, again with a single test, but this time it is your $1,000^{th}$ experiment. Then you would have to use an $\alpha$ of 0.05/1,000 = 0.00005! Who would want to run any experiments with such a low $\alpha$? So my guess is that, when Tukey proposed the experiment-wise error rate, he may have wanted to be fair to each experiment, since each experiment takes money, time, and resources.
32,278
How is this "United States of Reddit" graph created?
First, I am James Dowdell, so I'm rather uniquely qualified to answer (created an account to answer, can confirm identity if anybody is worried). The simple answer is indeed what others have surmised: this is a http://en.wikipedia.org/wiki/Voronoi_diagram . We used the same concept on page 194, where the Voronoi sites there are the latitude/longitude pairs listed by craigslist.org . Unfortunately, this knowledge itself isn't actually very useful. With the Craigslist graph, it's clear what values to use for the sites. But what magic trick did Dataclysm use to assign x/y coordinates in this graph? The answer to that is far more involved than most people would expect, and I can't say I recommend redoing what we did. I bet somebody else here could recommend an approach that gets more or less the same result and is far simpler. The truth is: Christian and I went back and forth for over 3 months creating graphs for this chapter that we could never make work. But the results of one approach often fed into the next. The most critical thing unfortunately involves a technique and some image assets I'm not at liberty to explore or share in any meaningful way, because we may still yet use them somehow. What I'll say is that we took a complicated http://en.wikipedia.org/wiki/Graph_theory#Graph that we compiled with permission from Reddit's data, involving user ids and subreddits, and we played around with this graph and various derivatives of it inside http://gephi.github.io/ (I'm particularly a fan of "OpenOrd" these days). In fact we got a magnificent image - it would have been the highlight of the book if it had been published - but while it would have worked fine on a website, it didn't print well in a book - not enough room or resolution. Christian was originally considering setting it as a fold-out in the book, but it just wasn't cost effective for Crown. However, at this point we had an image that had x/y coordinates for the subreddits, and they were at least relatively arranged properly in x/y space. We were also in a hurry because the publish deadline was approaching. I'm a programmer first and a data guy second, so to accommodate the extremely tight boundaries of the page in the book and the time left on the clock, my instinct was to write a program in Box2D which simulated the boundaries of the page as walls, put an extremely shrunk version of the graph inside, and simulated growing those nodes (not natural for Box2D by the way, it expects rigid bodies that don't change) until everything was flush against the walls and each other. Nodes grew at a rate proportional to the size of the subreddit they represented, which meant that final sizes would also be proportional in the same way. I unfortunately don't have a screenshot of the actual run that produced the graph in the book, but I attach here the run for an unpublished related graph: [screenshot of the Box2D program while running]. The result of that didn't look very nice at all, but it did give me something very valuable: the Voronoi sites. I took the centroids of the resulting Box2D polygons, put them through a standard process, and that's what was used for the graph in the book. Text labels were applied by hand in Photoshop, I believe. Incidentally, the cell coloring was related to a statistic we had developed to form the graph back in (A)
32,279
How is this "United States of Reddit" graph created?
It looks more like a word cloud problem with a Voronoi polygon appearance. You need to use the word frequency to decide the location (high frequency means center). As long as the locations of the words are determined, drawing the Voronoi polygons should not be a big deal.
32,280
Inverse covariance matrix, off-diagonal entries
The underlying intuition is quite general: because multiplying a matrix by its inverse has to produce a matrix with a lot of zeros, if the original matrix contains only positive values then obviously the inverse has to contain some negative values in order to produce those zeros. But the intuition goes wrong in making the leap from "some" to "most." The problem is that only one negative coefficient is needed in each row to make this happen. As a counterexample, consider the family of $n\times n$ matrices $X_{n,\epsilon} = A_{n-1} + \epsilon 1_{n}^\prime 1_{n}$ for $\epsilon \gt 0$ and positive integers $n$ where $$A_{n-1} = \pmatrix{ 2 & -1 & 0 & 0 & 0 & 0 & 0 & \cdots & 0 \\ -1 & 2 & -1 & 0 & 0 & 0 & 0 & \cdots & 0 \\ 0 & -1 & 2 & -1 & 0 & 0 & 0 & \cdots & 0 \\ &&&&\ddots&&&&\\ 0 & \cdots & 0 & 0 & 0 & -1 & 2 & -1 & 0 \\ 0 & \cdots & 0 & 0 & 0 & 0 & -1 & 2 & -1 \\ 0 & \cdots & 0 & 0 & 0 & 0 & 0 & -1 & 2} $$ and $$1_{n} = (1,1,\ldots, 1)$$ has $n$ coefficients. Notice that when $0\lt\epsilon\lt 1,$ $X_{n,\epsilon}$ has only $2(n-1)$ negative coefficients (namely, $-1+\epsilon$) and the remaining $n^2 - 2n + 2 = (n-1)^2 + 1$ of them (namely, $2+\epsilon$ and $\epsilon$) are strictly positive. I chose these matrices $A_{n-1}$ because (1) they are (obviously) symmetric; (2) they are positive-definite (this is not so obvious, but it's an easy consequence of the theory of Lie Algebras in which they naturally arise); and (3) they have simple inverses with positive coefficients, $$A_{n-1}^{-1} = \left(b_{ij}\right);\quad b_{ij} = \frac{\min(n+1-i,n+1-j)\min(i,j)}{n+1}.$$ For instance, $$A_{3-1}^{-1} = \frac{1}{4}\pmatrix{3&2&1 \\ 2 & 4&2\\1&2&3}.$$ This is easy to prove simply by multiplying the two pairs of matrices and computing that the result is the $n\times n$ identity matrix. The Sherman-Morrison formula asserts $$X_{n,\epsilon}^{-1} = A_{n-1}^{-1} - \color{gray}{\frac{\epsilon}{1 + \epsilon\, 1_{n} A_{n-1}^{-1} 1_{n}} \left(A_{n-1}^{-1} 1_{n}^\prime 1_{n} A_{n-1}^{-1}\right)} = A_{n-1}^{-1} + \color{gray}{O(\epsilon)}.\tag{*}$$ Because the smallest entry in $A_{n-1}^{-1}$ is $1/(n+1),$ we can easily find $0\lt \epsilon \lt 1$ that are also small enough to make all the entries in the subtracted (gray) part of $(*)$ less than $1/(n+1),$ which leaves all the entries of $X_{n}^{-1}$ positive. (For instance, $0 \lt \epsilon\lt 1/(2n^3)$ will serve.) Obviously $X_{n,\epsilon}^{-1}$ is symmetric. For sufficiently small positive $\epsilon$ its eigenvalues must be close to those of $A_{n-1}^{-1},$ all of which are positive (because $A_{n-1}$ itself is positive definite), which makes all such $X_{n,\epsilon}^{-1}$ legitimate covariance matrices. We may conclude For all $n\ge 1$ and (for each $n$) sufficiently small $\epsilon\gt 0,$ the matrix $X_{n,\epsilon}^{-1}$ is a covariance matrix with strictly positive entries and its inverse $X_{n,\epsilon}$ has $(n-1)^2 + 1$ strictly positive entries, too. Thus, as $n$ grows large, the proportion of its positive entries becomes arbitrarily close to $1,$ because $$\frac{(n-1)^2 + 1}{n^2} \gt \left(1-\frac{1}{n}\right)^2 \to 1.$$
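A short numerical check of this counterexample (n and epsilon below are my own illustrative choices):

n   <- 10
eps <- 1 / (2 * n^3)
A   <- diag(2, n)
A[abs(row(A) - col(A)) == 1] <- -1          # the tridiagonal 2 / -1 matrix above
X   <- A + eps * matrix(1, n, n)            # X_{n, eps}
sum(X > 0)                                  # (n - 1)^2 + 1 strictly positive entries
all(solve(X) > 0)                           # TRUE: the covariance matrix X^{-1} is all positive
all(eigen(X)$values > 0)                    # TRUE: X (and hence X^{-1}) is positive definite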
Inverse covariance matrix, off-diagonal entries
The underlying intuition is quite general: because multiplying a matrix by its inverse has to produce a matrix with a lot of zeros, if the original matrix contains only positive values then obviously
Inverse covariance matrix, off-diagonal entries The underlying intuition is quite general: because multiplying a matrix by its inverse has to produce a matrix with a lot of zeros, if the original matrix contains only positive values then obviously the inverse has to contain some negative values in order to produce those zeros. But the intuition goes wrong in making the leap from "some" to "most." The problem is that only one negative coefficient is needed in each row to make this happen. As a counterexample, consider the family of $n\times n$ matrices $X_{n,\epsilon} = A_{n-1} + \epsilon 1_{n}^\prime 1_{n}$ for $\epsilon \gt 0$ and positive integers $n$ where $$A_{n-1} = \pmatrix{ 2 & -1 & 0 & 0 & 0 & 0 & 0 & \cdots & 0 \\ -1 & 2 & -1 & 0 & 0 & 0 & 0 & \cdots & 0 \\ 0 & -1 & 2 & -1 & 0 & 0 & 0 & \cdots & 0 \\ &&&&\ddots&&&&\\ 0 & \cdots & 0 & 0 & 0 & -1 & 2 & -1 & 0 \\ 0 & \cdots & 0 & 0 & 0 & 0 & -1 & 2 & -1 \\ 0 & \cdots & 0 & 0 & 0 & 0 & 0 & -1 & 2} $$ and $$1_{n} = (1,1,\ldots, 1)$$ has $n$ coefficients. Notice that when $0\lt\epsilon\lt 1,$ $X_{n,\epsilon}$ has only $2(n-1)$ negative coefficients (namely, $-1+\epsilon$) and the remaining $n^2 - 2n + 2 = (n-1)^2 + 1$ of them (namely, $2+\epsilon$ and $\epsilon$) are strictly positive. I chose these matrices $A_{n-1}$ because (1) they are (obviously) symmetric; (2) they are positive-definite (this is not so obvious, but it's an easy consequence of the theory of Lie Algebras in which they naturally arise); and (3) they have simple inverses with positive coefficients, $$A_{n-1}^{-1} = \left(b_{ij}\right);\quad b_{ij} = \frac{\min(n+1-i,n+1-j)\min(i,j)}{n+1}.$$ For instance, $$A_{3-1}^{-1} = \frac{1}{4}\pmatrix{3&2&1 \\ 2 & 4&2\\1&2&3}.$$ This is easy to prove simply by multiplying the two pairs of matrices and computing that the result is the $n\times n$ identity matrix. The Sherman-Morrison formula asserts $$X_{n,\epsilon}^{-1} = A_{n-1}^{-1} - \color{gray}{\frac{\epsilon}{1 + \epsilon\, 1_{n} A_{n-1}^{-1} 1_{n}} \left(A_{n-1}^{-1} 1_{n}^\prime 1_{n} A_{n-1}^{-1}\right)} = A_{n-1}^{-1} + \color{gray}{O(\epsilon)}.\tag{*}$$ Because the smallest entry in $A_{n-1}^{-1}$ is $1/(n+1),$ we can easily find $0\lt \epsilon \lt 1$ that are also small enough to make all the entries in the subtracted (gray) part of $(*)$ less than $1/(n+1),$ which leaves all the entries of $X_{n}^{-1}$ positive. (For instance, $0 \lt \epsilon\lt 1/(2n^3)$ will serve.) Obviously $X_{n,\epsilon}^{-1}$ is symmetric. For sufficiently small positive $\epsilon$ its eigenvalues must be close to those of $A_{n-1}^{-1},$ all of which are positive (because $A_{n-1}$ itself is positive definite), which makes all such $X_{n,\epsilon}^{-1}$ legitimate covariance matrices. We may conclude For all $n\ge 1$ and (for each $n$) sufficiently small $\epsilon\gt 0,$ the matrix $X_{n,\epsilon}^{-1}$ is a covariance matrix with strictly positive entries and its inverse $X_{n,\epsilon}$ has $(n-1)^2 + 1$ strictly positive entries, too. Thus, as $n$ grows large, the proportion of its positive entries becomes arbitrarily close to $1,$ because $$\frac{(n-1)^2 + 1}{n^2} \gt \left(1-\frac{1}{n}\right)^2 \to 1.$$
Inverse covariance matrix, off-diagonal entries The underlying intuition is quite general: because multiplying a matrix by its inverse has to produce a matrix with a lot of zeros, if the original matrix contains only positive values then obviously
32,281
Post hoc $\chi^2$ test with R
I like this question because too often, people do omnibus tests and then don't ask more specific questions about what is happening. If the goal is to compare "treatments" a, b, and c, I would suggest summarizing the data showing the percentages within each column, so you can see more clearly how they differ. Then to test these comparisons, one simple idea is to do the $\chi^2$ test on each pair of columns: > for (j in 1:3) print(chisq.test(mat[, -j])) Pearson's Chi-squared test data: mat[, -j] X-squared = 0.1542, df = 2, p-value = 0.9258 Pearson's Chi-squared test data: mat[, -j] X-squared = 4.5868, df = 2, p-value = 0.1009 Pearson's Chi-squared test data: mat[, -j] X-squared = 9.5653, df = 2, p-value = 0.008374 Since 3 tests are done, a Bonferroni correction is advised (multiply each $P$ value by 3). The last test, where column 3 is omitted, has a very low $P$ value, so you can conclude that the distributions of (good, fair, poor) are different for conditions a and b. Note, however, that condition c does not have much data, and that's largely why the other two results are nonsignificant. You could use a similar strategy to do pairwise comparisons of the rows.
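A self-contained version of this in R; the table mat below is invented for illustration, since the original counts are not shown.

mat <- matrix(c(40, 30, 15,
                35, 28, 12,
                10, 25,  8),
              nrow = 3, byrow = TRUE,
              dimnames = list(c("good", "fair", "poor"), c("a", "b", "c")))
round(prop.table(mat, margin = 2), 2)      # column percentages: how the treatments differ

# pairwise comparisons: drop one column at a time, test the remaining two
p.raw <- sapply(1:3, function(j) chisq.test(mat[, -j])$p.value)
p.adjust(p.raw, method = "bonferroni")     # Bonferroni adjustment (multiply by 3, capped at 1)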
Post hoc $\chi^2$ test with R
I like this question because too often, people do omnibus tests and then don't ask more specific questions about what is happening. If the goal is to compare "treatments" a, b, and c, I would suggest
Post hoc $\chi^2$ test with R I like this question because too often, people do omnibus tests and then don't ask more specific questions about what is happening. If the goal is to compare "treatments" a, b, and c, I would suggest summarizing the data showing the percentages within each column, so you can see more clearly how they differ. Then to test these comparisons, one simple idea is to do the $\chi^2$ test on each pair of columns: > for (j in 1:3) print(chisq.test(mat[, -j])) Pearson's Chi-squared test data: mat[, -j] X-squared = 0.1542, df = 2, p-value = 0.9258 Pearson's Chi-squared test data: mat[, -j] X-squared = 4.5868, df = 2, p-value = 0.1009 Pearson's Chi-squared test data: mat[, -j] X-squared = 9.5653, df = 2, p-value = 0.008374 Since 3 tests are done, a Bonferroni correction is advised (multiply each $P$ value by 3). The last test, where column 3 is omitted, has a very low $P$ value, so you can conclude that the distributions of (good, fair, poor) are different for conditions a and b. Note, however, that condition c does not have much data, and that's largely why the other two results are nonsignificant. You could use a similar strategy to do pairwise comparisons of the rows.
Post hoc $\chi^2$ test with R I like this question because too often, people do omnibus tests and then don't ask more specific questions about what is happening. If the goal is to compare "treatments" a, b, and c, I would suggest
32,282
Post hoc $\chi^2$ test with R
In case anyone still comes across this ancient thread - the procedure suggested by Guilherme is now implemented in the chisq.posthoc.test package, which also offers specific p-values based on the residuals.
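A minimal usage sketch, assuming a contingency table of counts like the one in the question; the argument names are as I recall them from the package documentation and may differ slightly between versions.

# install.packages("chisq.posthoc.test")
library(chisq.posthoc.test)

mat <- matrix(c(40, 30, 15,
                35, 28, 12,
                10, 25,  8),
              nrow = 3, byrow = TRUE,
              dimnames = list(c("good", "fair", "poor"), c("a", "b", "c")))

# returns the standardized residual and an adjusted p-value for every cell
chisq.posthoc.test(mat, method = "bonferroni")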
Post hoc $\chi^2$ test with R
In case anyone still comes across this ancient thread - the procedure suggested by Guilherme is now implemented in the chisq.posthoc.test package, which also offers specific p-values based on the resi
Post hoc $\chi^2$ test with R In case anyone still comes across this ancient thread - the procedure suggested by Guilherme is now implemented in the chisq.posthoc.test package, which also offers specific p-values based on the residuals.
Post hoc $\chi^2$ test with R In case anyone still comes across this ancient thread - the procedure suggested by Guilherme is now implemented in the chisq.posthoc.test package, which also offers specific p-values based on the resi
32,283
Post hoc $\chi^2$ test with R
Another way of doing this is by means of Pearson standardized residuals, as suggested by Agresti, A. (2007) in his book Categorical Data Analysis, section 3.3, "Following-up chi-squared tests". The Pearson standardized residual $e_{ij}$ measures how far each cell deviates from the null hypothesis (in this case, independence between the row and column variables). Positive residuals indicate a positive association between the row and column categories; negative residuals indicate a negative association. Since $e_{ij} \sim N(0,1)$ under the null, $|e_{ij}| > 2$ is indicative of association. They are obtained this way: tab2 <- chisq.test(mat) tab2$stdres a b c good 1.0164 -0.71661 -0.60995 fair 1.9643 -1.66201 -0.66782 poor -3.3512 2.68760 1.41203 The category poor is negatively associated with "a", so "a" is associated with few people rated poor. The category poor is positively associated with "b", so "b" is associated with a large number of people rated poor.
Post hoc $\chi^2$ test with R
Another way of doing this is by means of pearson standardized residuals, as suggested by Agresti, A. (2007) in his book Categorical Data Analysis section 3.3 Following-up chi-squared tests. The Pearso
Post hoc $\chi^2$ test with R Another way of doing this is by means of Pearson standardized residuals, as suggested by Agresti, A. (2007) in his book Categorical Data Analysis, section 3.3, "Following-up chi-squared tests". The Pearson standardized residual $e_{ij}$ measures how far each cell deviates from the null hypothesis (in this case, independence between the row and column variables). Positive residuals indicate a positive association between the row and column categories; negative residuals indicate a negative association. Since $e_{ij} \sim N(0,1)$ under the null, $|e_{ij}| > 2$ is indicative of association. They are obtained this way: tab2 <- chisq.test(mat) tab2$stdres a b c good 1.0164 -0.71661 -0.60995 fair 1.9643 -1.66201 -0.66782 poor -3.3512 2.68760 1.41203 The category poor is negatively associated with "a", so "a" is associated with few people rated poor. The category poor is positively associated with "b", so "b" is associated with a large number of people rated poor.
Post hoc $\chi^2$ test with R Another way of doing this is by means of pearson standardized residuals, as suggested by Agresti, A. (2007) in his book Categorical Data Analysis section 3.3 Following-up chi-squared tests. The Pearso
32,284
Why does the EM algorithm have to be iterative?
When you've found your objective function for the EM algorithm, I assume you treated the number of units with $x_i=0$, which I'll call $y$, as your latent parameter. In this case, I'm (again) assuming $Q$ represents a reduced form of the expected value over $y$ of the likelihood given $\lambda_{-1}$. This is not the same as the full likelihood, because that $\lambda_{-1}$ is treated as given. Therefore you cannot use $Q$ for the full likelihood, as it does not contain information about how changing $\lambda$ changes the distribution of $y$ (and you want to select the most likely values of $y$ as well when you maximize the full likelihood). This is why the full maximum likelihood for the zero-truncated Poisson differs from your $Q$ function, and why you get a different (and incorrect) answer when you maximize $f(\lambda)=Q(\lambda,\lambda)$. Numerically, maximizing $f(\lambda)$ will necessarily result in an objective function at least as large as your EM result, and probably larger, as there is no guarantee that the EM algorithm will converge to a maximum of $f$ - it's only supposed to converge to a maximum of the likelihood function!
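To make this concrete, here is a small R sketch of the EM iteration for the zero-truncated Poisson setup described above (data simulated for illustration); it also checks that maximising the zero-truncated likelihood directly gives the same answer as the EM limit.

set.seed(1)
x <- rpois(5000, lambda = 2)
x <- x[x > 0]                        # only the positive counts are observed
m <- length(x)

# EM: latent quantity y = expected number of unobserved zeros
lambda <- mean(x)                    # starting value
for (iter in 1:200) {
  p0 <- exp(-lambda)                           # P(X = 0) at the current lambda
  y  <- m * p0 / (1 - p0)                      # E-step: expected number of zeros
  lambda_new <- sum(x) / (m + y)               # M-step: complete-data Poisson MLE
  if (abs(lambda_new - lambda) < 1e-10) break
  lambda <- lambda_new
}
lambda                                         # EM estimate

# direct maximisation of the zero-truncated likelihood agrees with the EM limit
nll <- function(l) -sum(dpois(x, l, log = TRUE) - log(1 - exp(-l)))
optimize(nll, interval = c(0.01, 10))$minimum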
Why does the EM algorithm have to be iterative?
When you've found your objective function for the EM algorithm I assume you treated the number of units with $x_i=0$, which I'll call $y$, as your latent parameter. In this case, I'm (again) assuming
Why does the EM algorithm have to be iterative? When you've found your objective function for the EM algorithm, I assume you treated the number of units with $x_i=0$, which I'll call $y$, as your latent parameter. In this case, I'm (again) assuming $Q$ represents a reduced form of the expected value over $y$ of the likelihood given $\lambda_{-1}$. This is not the same as the full likelihood, because that $\lambda_{-1}$ is treated as given. Therefore you cannot use $Q$ for the full likelihood, as it does not contain information about how changing $\lambda$ changes the distribution of $y$ (and you want to select the most likely values of $y$ as well when you maximize the full likelihood). This is why the full maximum likelihood for the zero-truncated Poisson differs from your $Q$ function, and why you get a different (and incorrect) answer when you maximize $f(\lambda)=Q(\lambda,\lambda)$. Numerically, maximizing $f(\lambda)$ will necessarily result in an objective function at least as large as your EM result, and probably larger, as there is no guarantee that the EM algorithm will converge to a maximum of $f$ - it's only supposed to converge to a maximum of the likelihood function!
Why does the EM algorithm have to be iterative? When you've found your objective function for the EM algorithm I assume you treated the number of units with $x_i=0$, which I'll call $y$, as your latent parameter. In this case, I'm (again) assuming
32,285
Survival analysis in R with left-truncated data
I'm assuming that time from diagnosis is your underlying time variable. For simplicity I also assume that the event can only occur once. You can treat your data as being left-censored. This is different from being left-truncated, however. For left-truncated data we only include in the study patients conditional on them not having experienced the event at the time of inclusion. This would in your case amount to throwing away the patients that have had the event before 2000. Thus, we are modelling survival conditional on survival until inclusion. This is different from left-censoring. Left-censoring occurs when we only know the upper limit of the time of an event. This is exactly what you suggest yourself, if I understand you correctly. In this case, we include all individuals regardless of their survival times, but for some individuals we only know an upper bound of their survival time. Chapter III of Statistical Models Based on Counting Processes by PK Andersen et al. provides a good explanation of the above along with some examples of both cases.
Survival analysis in R with left-truncated data
I'm assuming that time from diagnosis is your underlying time variable. For simplicity I also assume that the event can only occur once. You can treat your data as being left-censored. This is differe
Survival analysis in R with left-truncated data I'm assuming that time from diagnosis is your underlying time variable. For simplicity I also assume that the event can only occur once. You can treat your data as being left-censored. This is different from being left-truncated, however. For left-truncated data we only include in the study patients conditional on them not having experienced the event at the time of inclusion. This would in your case amount to throwing away the patients that have had the event before 2000. Thus, we are modelling survival conditional on survival until inclusion. This is different from left-censoring. Left-censoring occurs when we only know the upper limit of the time of an event. This is exactly what you suggest yourself, if I understand you correctly. In this case, we include all individuals regardless of their survival times, but for some individuals we only know an upper bound of their survival time. Chapter III of Statistical Models Based on Counting Processes by PK Andersen et al. provides a good explanation of the above along with some examples of both cases.
Survival analysis in R with left-truncated data I'm assuming that time from diagnosis is your underlying time variable. For simplicity I also assume that the event can only occur once. You can treat your data as being left-censored. This is differe
32,286
Survival analysis in R with left-truncated data
You are likely to run afoul of immortal time bias, which means that the cohort diagnosed pre-2000 is effectively immortal until after 2000, when the outcome can first be observed. Per Rothman and Greenland, the correct approach is indeed to exclude (truncate) the pre-2000 years of observation from the analysis, or else risk biasing the between-cohort estimates toward the null hypothesis of no difference in hazard. The survival command Surv does not seem to follow the syntax you use. What about creating a new variable where the value 0 corresponds to the beginning of study time (e.g., the year 2000), 1 corresponds to one unit of time in, and so on? You will want to read up on: Rothman, K. J. and Greenland, S. (1998). Modern Epidemiology, chapter Cohort Studies—Immortal Person Time. Lippincott-Raven, 2nd edition.
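For completeness, here is a sketch of how delayed entry (left truncation) is coded with the survival package's counting-process form of Surv; the data frame d is simulated purely for illustration, so the variable names and rates are made up.

library(survival)
set.seed(1)

n <- 200
d <- data.frame(group = factor(rep(c("pre2000", "post2000"), each = n / 2)))
d$entry <- ifelse(d$group == "pre2000", runif(n, 0, 5), 0)   # years from diagnosis to study entry
d$exit  <- d$entry + rexp(n, rate = 0.2)                     # years from diagnosis to event/censoring
d$event <- rbinom(n, 1, 0.7)                                 # 1 = event observed, 0 = censored

# (start, stop, event] form: person-time before 'entry' is excluded from the risk sets
fit <- coxph(Surv(entry, exit, event) ~ group, data = d)
summary(fit)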
Survival analysis in R with left-truncated data
You are likely to run afoul of immortal time bias, which means that the cohort diagnosed pre-2000 is effectively immortal, until post-2000 when the outcome can occur. Per Rothman and Greenland, the co
Survival analysis in R with left-truncated data You are likely to run afoul of immortal time bias, which means that the cohort diagnosed pre-2000 is effectively immortal, until post-2000 when the outcome can occur. Per Rothman and Greenland, the correct approach is indeed to exclude (truncate) the pre-2000 years of observation from the analysis, or else risk biasing between cohort estimates toward the null hypothesis of no difference in hazard. The survival command Surv does not seem to follow the syntax you use. What about creating a new variable where the value 0 corresponds to the Beginning of (Study) Time (e.g. year = 2000?), 1 corresponds to 1 unit of time in, etc? You will want to read up on: Rothman, K. J. and Greenland, S. (1998). Modern Epidemiology, chapter Cohort Studies—Immortal Person Time. Lippincott-Raven, 2nd edition.
Survival analysis in R with left-truncated data You are likely to run afoul of immortal time bias, which means that the cohort diagnosed pre-2000 is effectively immortal, until post-2000 when the outcome can occur. Per Rothman and Greenland, the co
32,287
UMVUE for normal distribution $\sigma$
Although the question was posted almost 4 years ago, I would like to answer it. To solve this problem, we notice that $(n-1)S^2/ \sigma^2$ has a chi-square distribution with $n-1$ degrees of freedom, where $S^2= \sum^n_{i=1}{(X_i-\bar{X})^2\over n-1}={{\sum^n_{i=1}X_i^2}-n \bar{X}^2\over n-1}$ and each $X_i$ has a normal distribution with mean $\mu$ and variance $\sigma^2$. Note that $S$ is a function of ${\sum^n_{i=1}X_i^2}$ and ${\sum^n_{i=1}X_i}$, which form a complete sufficient statistic, so by Lehmann–Scheffé an unbiased estimator built from $S$ is the UMVUE. Let's evaluate $E[S]$. To simplify, let $q=(n-1)S^2/ \sigma^2$, so that $S=\sqrt{q \sigma^2 /(n-1)}$. $$E[S]=\int^{\infty}_0 \sqrt{ \sigma^2 \over (n-1)} q^{1 \over2}f_q \,dq \\=\int^{\infty}_0 \sqrt{ \sigma^2 \over (n-1)} q^{1 \over2} { q^{{n-1 \over 2} -1} e^{-q \over 2} \over \Gamma({n-1 \over 2}) 2^{n-1 \over 2}} dq \\ = \sqrt{ \sigma^2 \over (n-1)} \int^{\infty}_0 { q^{{n \over 2} -1} e^{-q \over 2} \over \Gamma({n-1 \over 2}) 2^{n-1 \over 2}} dq \\= \sqrt{ \sigma^2 \over (n-1)} { \Gamma({n \over 2}) 2^{1 \over 2} \over \Gamma({n-1 \over 2}) } $$ Rearranging, $E[S]=\sigma\,\sqrt{2\over n-1}\,{\Gamma({n\over 2})\over\Gamma({n-1\over 2})}$, so dividing $S$ by the constant $\sqrt{2\over n-1}\,{\Gamma({n\over 2})\over\Gamma({n-1\over 2})}$ gives the desired unbiased (UMVUE) estimator of $\sigma$.
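A quick simulation check of this expectation in R; mu, sigma and n are arbitrary choices for illustration.

set.seed(1)
n <- 5; mu <- 3; sigma <- 2
c_n <- sqrt(2 / (n - 1)) * gamma(n / 2) / gamma((n - 1) / 2)   # E[S] = c_n * sigma

S <- replicate(1e5, sd(rnorm(n, mu, sigma)))
c(mean(S), c_n * sigma)        # the two agree: S is biased low for sigma
mean(S / c_n)                  # close to sigma: S / c_n is the unbiased estimator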
UMVUE for normal distribution $\sigma$
Although the question was posted almost 4 years ago, I would like to answer this question. English is not my mother tongue and I am learning it so please don't mind my awkward sentences. To solve thi
UMVUE for normal distribution $\sigma$ Although the question was posted almost 4 years ago, I would like to answer this question. English is not my mother tongue and I am learning it so please don't mind my awkward sentences. To solve this problem, we notice that $(n-1)S^2/ \sigma^2$ has a chisquare distribution with $n-1$ degree of freedom, while $S^2= \sum^n_{i=1}{(X-\bar{X})^2\over n-1}={{\sum^n_{i=1}X^2}-n \bar{X}^2\over n-1}$ and $X$ has a normal distribution with mean $\mu$ and variance $\sigma^2$. Note that $S$ contains ${\sum^n_{i=1}X^2}$ and ${\sum^n_{i=1}X}$. Let's evaluate $E[S]$. To simplify let $q=(n-1)S^2/ \sigma^2$, then $S=\sqrt{q \sigma^2 /(n-1)}$. $$E[S]=\int^{\infty}_0 \sqrt{ \sigma^2 \over (n-1)} q^{1 \over2}f_q dq \\=\int^{\infty}_0 \sqrt{ \sigma^2 \over (n-1)} q^{1 \over2} { q^{{n-1 \over 2} -1} e^{-q \over 2} \over \Gamma({n-1 \over 2}) 2^{n-1 \over 2}} dq \\ = \sqrt{ \sigma^2 \over (n-1)} \int^{\infty}_0 { q^{{n \over 2} -1} e^{-q \over 2} \over \Gamma({n-1 \over 2}) 2^{n-1 \over 2}} dq \\= \sqrt{ \sigma^2 \over (n-1)} { \Gamma({n \over 2}) 2^{1 \over 2} \over \Gamma({n-1 \over 2}) } $$ After some rearranging you can get the desired result. It would be appreciated if someone corrects any grammatical or mathematical mistakes.
UMVUE for normal distribution $\sigma$ Although the question was posted almost 4 years ago, I would like to answer this question. English is not my mother tongue and I am learning it so please don't mind my awkward sentences. To solve thi
32,288
How would one formally prove that the OOB error in random forest is unbiased?
I do not know if this is the final answer, but these things can't fit in a comment. The statement that OOB errors are unbiased is often used, but I never saw a demonstration. After much searching, I finally gave up after carefully reading the well-known page by Breiman on random forests, in the section "The out-of-bag (oob) error estimate". In case you did not notice (as I missed it for some time), the last proposition is the important one: "This has proven to be unbiased in many tests." So, no sign of a formal derivation. More than that, it seems to have been shown that when you have more variables than instances this estimator is biased. See here. For the in-the-bag error there is a formal derivation. The in-the-bag error is the bootstrap error, and there is plenty of literature, starting with "An Introduction to the Bootstrap" by Efron and Tibshirani. However, the cleanest demonstration I saw is here. If you want to start to find a proof, I think a good starting point is the comparison of this estimate with N-fold cross-validation. In The Elements of Statistical Learning it is stated that there is an identity in the limit, as the number of samples goes to infinity.
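Not a proof, but the comparison is easy to run empirically in R with the randomForest package; the sketch below contrasts the OOB error with an independent test-set estimate on simulated data (all settings are arbitrary).

library(randomForest)
set.seed(1)

n <- 400; p <- 10
x <- matrix(rnorm(n * p), n, p)
y <- factor(ifelse(x[, 1] + x[, 2] + rnorm(n) > 0, "A", "B"))
train <- sample(n, n / 2)

rf <- randomForest(x[train, ], y[train], ntree = 500)
oob_err  <- rf$err.rate[rf$ntree, "OOB"]                   # OOB estimate from the training data alone
test_err <- mean(predict(rf, x[-train, ]) != y[-train])    # estimate from a genuinely held-out set
c(oob = oob_err, test = test_err)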
How would one formally prove that the OOB error in random forest is unbiased?
I do not know if this is the final answer, but those things can't fit a comment. The statement that OOB errors are unbiased is often used, but I never saw a demonstration. After many searchings, I fi
How would one formally prove that the OOB error in random forest is unbiased? I do not know if this is the final answer, but these things can't fit in a comment. The statement that OOB errors are unbiased is often used, but I never saw a demonstration. After much searching, I finally gave up after carefully reading the well-known page by Breiman on random forests, in the section "The out-of-bag (oob) error estimate". In case you did not notice (as I missed it for some time), the last proposition is the important one: "This has proven to be unbiased in many tests." So, no sign of a formal derivation. More than that, it seems to have been shown that when you have more variables than instances this estimator is biased. See here. For the in-the-bag error there is a formal derivation. The in-the-bag error is the bootstrap error, and there is plenty of literature, starting with "An Introduction to the Bootstrap" by Efron and Tibshirani. However, the cleanest demonstration I saw is here. If you want to start to find a proof, I think a good starting point is the comparison of this estimate with N-fold cross-validation. In The Elements of Statistical Learning it is stated that there is an identity in the limit, as the number of samples goes to infinity.
How would one formally prove that the OOB error in random forest is unbiased? I do not know if this is the final answer, but those things can't fit a comment. The statement that OOB errors are unbiased is often used, but I never saw a demonstration. After many searchings, I fi
32,289
How would one formally prove that the OOB error in random forest is unbiased?
Why do you expect the oob error to be unbiased? There's (at least) 1 training case less available for the trees used in the surrogate forest compared to the "original" forest. I'd expect this to lead to a small pessimistic bias roughly comparable to leave-one-out cross-validation. There are roughly $\frac{1}{e} \approx \frac{1}{3}$ of the number of trees of the "original" forest in the surrogate forest that is actually evaluated with the left-out case. Thus, I'd expect higher variance in the prediction, which will cause further pessimistic bias. Both thoughts are closely related to the learning curve of the classifier and application/data in question: the first to the average performance as function of training sample size and the second to the variance around this average curve. All in all, I'd expect you'll at most be able to show formally that oob is an unbiased estimator of the performance of random forests containing $\frac{1}{e} \approx \frac{1}{3}$ of the number of trees of the "original" forest, and being trained on $n - 1$ cases of the original training data. Note also that Breiman uses "unbiased" for out-of-bootstrap the same way as he uses it for cross validation, where we also have a (small) pessimistic bias. Coming from an experimental field, I'm OK with saying that both are practically unbiased as the bias is usually much less of a problem than the variance (you're probably not using random forests if you have the luxury of having plenty of cases).
How would one formally prove that the OOB error in random forest is unbiased?
Why do you expect the oob error to be unbiased? There's (at least) 1 training case less available for the trees used in the surrogate forest compared to the "original" forest. I'd expect this to lea
How would one formally prove that the OOB error in random forest is unbiased? Why do you expect the oob error to be unbiased? There's (at least) 1 training case less available for the trees used in the surrogate forest compared to the "original" forest. I'd expect this to lead to a small pessimistic bias roughly comparable to leave-one-out cross-validation. There are roughly $\frac{1}{e} \approx \frac{1}{3}$ of the number of trees of the "original" forest in the surrogate forest that is actually evaluated with the left-out case. Thus, I'd expect higher variance in the prediction, which will cause further pessimistic bias. Both thoughts are closely related to the learning curve of the classifier and application/data in question: the first to the average performance as function of training sample size and the second to the variance around this average curve. All in all, I'd expect you'll at most be able to show formally that oob is an unbiased estimator of the performance of random forests containing $\frac{1}{e} \approx \frac{1}{3}$ of the number of trees of the "original" forest, and being trained on $n - 1$ cases of the original training data. Note also that Breiman uses "unbiased" for out-of-bootstrap the same way as he uses it for cross validation, where we also have a (small) pessimistic bias. Coming from an experimental field, I'm OK with saying that both are practically unbiased as the bias is usually much less of a problem than the variance (you're probably not using random forests if you have the luxury of having plenty of cases).
How would one formally prove that the OOB error in random forest is unbiased? Why do you expect the oob error to be unbiased? There's (at least) 1 training case less available for the trees used in the surrogate forest compared to the "original" forest. I'd expect this to lea
32,290
Confused about Cholesky and eigen decomposition
1) Pretty much yes. The reason is that the $x_i$'s are going to end up being a linear combination of the $z_i$'s. That works out nicely for Gaussian deviates because any linear combination of Gaussian deviates is, itself, a Gaussian deviate. Unfortunately, this is not necessarily true of other distributions. 2) It's a little puzzling, I know, but they are equivalent. Let $\Sigma$ be your covariance matrix and suppose you have both the Cholesky factorization, $\Sigma=L L^T$ and the eigendecomposition, $\Sigma=U \lambda U^T$. The covariance of $L z$ is given by: $$ \begin{array}{} E[L z (L z)^T] & = & E[L z z^T L^T] \\ & = & L \ E[z z^T] \ L^T \\ & = & L \ I \ L^T \\ & = & L L^T \\ & = & \Sigma \end{array} $$ Similarly, the covariance of $U \lambda^\frac{1}{2} z$ is given by: $$ \begin{array}{} E[U \lambda^\frac{1}{2} z (U \lambda^\frac{1}{2} z)^T] & = & E[U \lambda^\frac{1}{2} z z^T \lambda^\frac{1}{2} U^T] \\ & = & U \lambda^\frac{1}{2} \ E[z z^T] \ \lambda^\frac{1}{2} U^T \\ & = & U \lambda^\frac{1}{2} \ I \ \lambda^\frac{1}{2} U^T \\ & = & U \lambda^\frac{1}{2} \lambda^\frac{1}{2} U^T \\ & = & U \lambda U^T \\ & = & \Sigma \end{array} $$ For purposes of computation, I suggest you stick with the Cholesky factorization unless your covariance matrix is ill-conditioned/nearly singular/has a high condition number. Then it's probably best to switch to the eigendecomposition.
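Both constructions are easy to compare numerically in R; note that chol() returns the upper-triangular factor, so the lower-triangular $L$ used above is its transpose. The covariance matrix below is arbitrary.

set.seed(1)
Sigma <- matrix(c(4, 2, 1,
                  2, 3, 1,
                  1, 1, 2), 3, 3)
n <- 1e5
Z <- matrix(rnorm(3 * n), nrow = 3)           # columns are iid N(0, I) vectors

# Cholesky: x = L z with L lower triangular and L L' = Sigma
L  <- t(chol(Sigma))
X1 <- L %*% Z

# Eigendecomposition: x = U lambda^{1/2} z
e  <- eigen(Sigma, symmetric = TRUE)
X2 <- e$vectors %*% diag(sqrt(e$values)) %*% Z

cov(t(X1))    # both sample covariances are close to Sigma
cov(t(X2))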
Confused about Cholesky and eigen decomposition
1) Pretty much yes. The reason is that the $x_i$'s are going to end up being a linear combination of the $z_i$'s. That works out nicely for Gaussian deviates because any linear combination of Gaussi
Confused about Cholesky and eigen decomposition 1) Pretty much yes. The reason is that the $x_i$'s are going to end up being a linear combination of the $z_i$'s. That works out nicely for Gaussian deviates because any linear combination of Gaussian deviates is, itself, a Gaussian deviate. Unfortunately, this is not necessarily true of other distributions. 2) It's a little puzzling, I know, but they are equivalent. Let $\Sigma$ be your covariance matrix and suppose you have both the Cholesky factorization, $\Sigma=L L^T$ and the eigendecomposition, $\Sigma=U \lambda U^T$. The covariance of $L z$ is given by: $$ \begin{array}{} E[L z (L z)^T] & = & E[L z z^T L^T] \\ & = & L \ E[z z^T] \ L^T \\ & = & L \ I \ L^T \\ & = & L L^T \\ & = & \Sigma \end{array} $$ Similarly, the covariance of $U \lambda^\frac{1}{2} z$ is given by: $$ \begin{array}{} E[U \lambda^\frac{1}{2} z (U \lambda^\frac{1}{2} z)^T] & = & E[U \lambda^\frac{1}{2} z z^T \lambda^\frac{1}{2} U^T] \\ & = & U \lambda^\frac{1}{2} \ E[z z^T] \ \lambda^\frac{1}{2} U^T \\ & = & U \lambda^\frac{1}{2} \ I \ \lambda^\frac{1}{2} U^T \\ & = & U \lambda^\frac{1}{2} \lambda^\frac{1}{2} U^T \\ & = & U \lambda U^T \\ & = & \Sigma \end{array} $$ For purposes of computation, I suggest you stick with the Cholesky factorization unless your covariance matrix is ill-conditioned/nearly singular/has a high condition number. Then it's probably best to switch to the eigendecomposition.
Confused about Cholesky and eigen decomposition 1) Pretty much yes. The reason is that the $x_i$'s are going to end up being a linear combination of the $z_i$'s. That works out nicely for Gaussian deviates because any linear combination of Gaussi
32,291
Given independence, is the median of a product equal to the product of the medians?
Counterexample: Consider $X_i\sim\text{Unif}(0,1)$, $i=1,2$. Their common median is $\frac{1}{2}$. Let $Y=X_1\, X_2$. The median of $Y$ is about $0.1867$, which is smaller than $(\frac{1}{2})^2\,\text{:}$ The log of a uniform is the negative of a standard exponential. The sum of two exponential random variables is gamma-distributed with shape 2, which (for scale 1) has median 1.67834... Hence the median of the log of the product of two uniforms is -1.67834. Exponentiation is monotonic, so the median of the product of two uniforms is $\exp(-1.67834...)\approx 0.1867$ More directly, it's relatively easy to derive the density of the product ($f(y) = \log(1/y),\quad 0<y<1$), which means the median is found by solving $m - m\log m =\frac{1}{2}$ for $m$ (which has two solutions, but only one in $(0,1)$ ). Additional question: Does a similar relationship exist for α-trimmed means? Yes, in the sense that it's also not true in general for trimmed means.
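Both the simulation and the exact root are one-liners in R and reproduce the value quoted above.

set.seed(1)
median(runif(1e6) * runif(1e6))                       # roughly 0.187, not 0.25

# exact median: solve m - m*log(m) = 1/2 on (0, 1)
uniroot(function(m) m - m * log(m) - 0.5, c(1e-6, 1))$root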
Given independence, is the median of a product equal to the product of the medians?
Counterexample: Consider $X_i\sim\text{Unif}(0,1)$, $i=1,2$. Their common median is $\frac{1}{2}$. Let $Y=X_1\, X_2$. The median of $Y$ is about $0.1867$, which is smaller than $(\frac{1}{2})^2\,\text
Given independence, is the median of a product equal to the product of the medians? Counterexample: Consider $X_i\sim\text{Unif}(0,1)$, $i=1,2$. Their common median is $\frac{1}{2}$. Let $Y=X_1\, X_2$. The median of $Y$ is about $0.1867$, which is smaller than $(\frac{1}{2})^2\,\text{:}$ The log of a uniform is the negative of a standard exponential. The sum of two exponential random variables is gamma-distributed with shape 2, which (for scale 1) has median 1.67834... Hence the median of the log of the product of two uniforms is -1.67834. Exponentiation is monotonic, so the median of the product of two uniforms is $\exp(-1.67834...)\approx 0.1867$ More directly, it's relatively easy to derive the density of the product ($f(y) = \log(1/y),\quad 0<y<1$), which means the median is found by solving $m - m\log m =\frac{1}{2}$ for $m$ (which has two solutions, but only one in $(0,1)$ ). Additional question: Does a similar relationship exist for α-trimmed means? Yes, in the sense that it's also not true in general for trimmed means.
Given independence, is the median of a product equal to the product of the medians? Counterexample: Consider $X_i\sim\text{Unif}(0,1)$, $i=1,2$. Their common median is $\frac{1}{2}$. Let $Y=X_1\, X_2$. The median of $Y$ is about $0.1867$, which is smaller than $(\frac{1}{2})^2\,\text
32,292
Given independence, is the median of a product equal to the product of the medians?
I suspect, but have not proven, that sufficient conditions for the relationship to hold are: 1) independence, 2) X and Y both have symmetric distributions, and 3) at least one of the distributions of X and Y is centred on zero. I don't think you need condition 2). Say X has median zero and both variables are continuous (so that P(X=0) = P(Y=0) = 0). Then we have 4 cases: (1) x>0, y>0; (2) x>0, y<0; (3) x<0, y>0; (4) x<0, y<0. We have x*y > 0 in cases 1 and 4. If X has median 0, then p(x>0) = 0.5. If X and Y are independent, then p(x>0, y>0) = p(x>0) * p(y>0) (and likewise for all 4 combinations), so p(x*y>0) = p(x>0)*p(y>0) + p(x<0)*p(y<0) = 0.5 (p(y>0)+p(y<0)) = 0.5 => the median of x*y is also 0, which equals median(x) * median(y).
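A quick simulation in R supports this: neither variable below is symmetric, but X has median 0, and the product's median comes out as 0 = median(X) * median(Y). The particular distributions are chosen arbitrarily.

set.seed(1)
x <- rexp(1e6) - log(2)      # median 0, but not symmetric
y <- rexp(1e6) + 1           # median 1 + log(2), not symmetric, not centred on zero
median(x * y)                # approximately 0
median(x) * median(y)        # also approximately 0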
Given independence, is the median of a product equal to the product of the medians?
I suspect, but have not proven, that sufficient conditions for the relationship to hold are: 1) independence, 2) X and Y both have symmetric distributions, and 3) At least one of the distributions of
Given independence, is the median of a product equal to the product of the medians? I suspect, but have not proven, that sufficient conditions for the relationship to hold are: 1) independence, 2) X and Y both have symmetric distributions, and 3) At least one of the distributions of X and Y is centred on zero. I don't think you need condition 2). Let's say X has median zero. Then we have 4 cases: x>0, y>0 x>0, y<0 x<0, y>0 x<0, y<0 x*y > 0 will be true in cases 1 and 4. If X has median 0, then p(x>0) = 0.5 If X and Y are independent, then p(x>0, y>0) = p(x>0) * p(y>0) (for all 4 combinations) so p(x*y>0) = p(x>0)*p(y>0) + p(x<0)*p(y<0) = 0.5 (p(y>0)+p(y<0)) = 0.5 => the median of x*y is also 0
Given independence, is the median of a product equal to the product of the medians? I suspect, but have not proven, that sufficient conditions for the relationship to hold are: 1) independence, 2) X and Y both have symmetric distributions, and 3) At least one of the distributions of
32,293
How to learn Bayesian Network Structure from the dataset?
The score function measures whether the DAG structure that has been learnt is a good fit to the dataset. Of course, you can define the score function in several ways, depending on the dataset, and the ultimate objective of learning the DAG structure. One commonly used score function is the log-posterior. Given dataset $D$ and a vector $\mathbf{X}$ of variables, the log posterior score function $S(D,G)$ is defined as $$ S(D,G) := \log{p_{pr}(G)} + \log{p(D|G)} $$ where $p_{pr}$ is the prior over the DAGs. Let the set of parameters be $\theta \in \Theta$. $p(D|G)$ is the marginal likelihood $$ p(D|G)= \int_{\Theta}{p(D|G, \theta) \cdot p_{pr}(\theta)d\theta} $$ The bnlearn R Package defines several score functions depending on the nature of the data (whether it is categorical, continuous or mixed). Categorical data (multinomial distribution): the multinomial log-likelihood; the Akaike Information Criterion (AIC); the Bayesian Information Criterion (BIC); a score equivalent Dirichlet posterior density (BDe); a sparse Dirichlet posterior density (BDs); a Dirichlet posterior density based on Jeffrey's prior (BDJ); a modified Bayesian Dirichlet for mixtures of interventional and observational data; the K2 score; Continuous data (multivariate normal distribution): the multivariate Gaussian log-likelihood; the corresponding Akaike Information Criterion (AIC); the corresponding Bayesian Information Criterion (BIC); a score equivalent Gaussian posterior density (BGe); Mixed data (conditional Gaussian distribution): the conditional Gaussian log-likelihood; the corresponding Akaike Information Criterion (AIC); the corresponding Bayesian Information Criterion (BIC). For $n$ variables, the number of possible DAGs is super-exponential. Here is a link to the integer sequence. As you can see, the number grows very fast. https://oeis.org/A003024
How to learn Bayesian Network Structure from the dataset?
The score function measures whether the DAG structure that has been learnt is a good fit to the dataset. Of course, you can define the score function in several ways, depending on the dataset, and the
How to learn Bayesian Network Structure from the dataset? The score function measures whether the DAG structure that has been learnt is a good fit to the dataset. Of course, you can define the score function in several ways, depending on the dataset, and the ultimate objective of learning the DAG structure. One commonly used score function is the log-posterior. Given dataset $D$ and a vector $\mathbf{X}$ of variables, the log posterior score function $S(D,G)$ is defined as $$ S(D,G) := \log{p_{pr}(G)} + \log{p(D|G)} $$ where $p_{pr}$ is the prior over the DAGs. Let the set of parameters be $\theta \in \Theta$. $p(D|G)$ is the marginal likelihood $$ p(D|G)= \int_{\Theta}{p(D|G, \theta) \cdot p_{pr}(\theta)d\theta} $$ The bnlearn R Package defines several score functions depending on the nature of the data (whether it is categorical, continuous or mixed). Categorical data (multinomial distribution): the multinomial log-likelihood; the Akaike Information Criterion (AIC); the Bayesian Information Criterion (BIC); a score equivalent Dirichlet posterior density (BDe); a sparse Dirichlet posterior density (BDs); a Dirichlet posterior density based on Jeffrey's prior (BDJ); a modified Bayesian Dirichlet for mixtures of interventional and observational data; the K2 score; Continuous data (multivariate normal distribution): the multivariate Gaussian log-likelihood; the corresponding Akaike Information Criterion (AIC); the corresponding Bayesian Information Criterion (BIC); a score equivalent Gaussian posterior density (BGe); Mixed data (conditional Gaussian distribution): the conditional Gaussian log-likelihood; the corresponding Akaike Information Criterion (AIC); the corresponding Bayesian Information Criterion (BIC). For $n$ variables, the number of possible DAGs is super-exponential. Here is a link to the integer sequence. As you can see, the number grows very fast. https://oeis.org/A003024
How to learn Bayesian Network Structure from the dataset? The score function measures whether the DAG structure that has been learnt is a good fit to the dataset. Of course, you can define the score function in several ways, depending on the dataset, and the
32,294
How to learn Bayesian Network Structure from the dataset?
There are a number of packages you can use in R. One example that I am familiar with is bnlearn. A large number of the algorithms in this package use local search. This means that for the most part the procedure is: (1) Generate a random DAG structure. (2) Score the structure using some methodology (in bnlearn it is by default AIC or BIC). (3) Score all neighbors of the current structure (meaning the same structure but changed by one arc). (4) Select the neighbor with a better score as the next DAG structure to explore. (5) Stop when no single-arc change improves the score. This algorithm may converge to a local maximum.
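A minimal bnlearn sketch of this hill-climbing search, using the small learning.test dataset that ships with the package; the function and argument names are as in recent bnlearn versions and may vary slightly.

library(bnlearn)
data(learning.test)                          # small discrete example dataset

dag <- hc(learning.test, score = "bic")      # greedy search over single-arc changes
dag
score(dag, learning.test, type = "bic")      # score of the learnt structure

# random restarts reduce the risk of getting stuck in a local maximum
dag2 <- hc(learning.test, score = "bic", restart = 10)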
How to learn Bayesian Network Structure from the dataset?
There are a number of packages you can use in R. One example that I am familiar with is bnlearn. A large number of the algorithms in this package use local search. This means that for the most part t
How to learn Bayesian Network Structure from the dataset? There are a number of packages you can use in R. One example that I am familiar with is bnlearn. A large number of the algorithms in this package use local search. This means that for the most part the procedure is: (1) Generate a random DAG structure. (2) Score the structure using some methodology (in bnlearn it is by default AIC or BIC). (3) Score all neighbors of the current structure (meaning the same structure but changed by one arc). (4) Select the neighbor with a better score as the next DAG structure to explore. (5) Stop when no single-arc change improves the score. This algorithm may converge to a local maximum.
How to learn Bayesian Network Structure from the dataset? There are a number of packages you can use in R. One example that I am familiar with is bnlearn. A large number of the algorithms in this package use local search. This means that for the most part t
32,295
Extreme value theory for count data
For any recent visitors, there have been new developments in this area by Hitz, Davis and Samorodnitsky (arXiv:1707.05033). Taking a peaks-over-threshold approach instead of block maxima, the Discrete Generalised Pareto Distribution is derived as the $\operatorname{floor}$ of a GPD, and discrete Maximum Domains of Attraction (DMDA) are introduced by relating them to the classical MDAs. The whole thing is linked to, but different from, Zipf's Law. In terms of the paper's terminology, the Poisson distribution is in the DMDA of a Gumbel distribution $(\xi = 0)$, as are the Negative Binomial and Geometric distributions.
Extreme value theory for count data
For any recent visitors, there's been new developments in this area by Hitz, Davis and Samorodnitsky (arXiv:1707.05033). Taking a peaks-over-threshold approach instead of block maxima, the Discrete Ge
Extreme value theory for count data For any recent visitors, there's been new developments in this area by Hitz, Davis and Samorodnitsky (arXiv:1707.05033). Taking a peaks-over-threshold approach instead of block maxima, the Discrete Generalised Pareto Distribution is derived as the $\operatorname{floor}$ of a GPD, and discrete Maximum Domains of Attraction (DMDA) are introduced by relating them to the classical MDAs. The whole thing is linked to, but different from, Zipf's Law. In terms of the paper's terminology, the Poisson distribution is in the DMDA of a Gumbel distribution $(\xi = 0)$, as are the Negative Binomial and Geometric distributions.
Extreme value theory for count data For any recent visitors, there's been new developments in this area by Hitz, Davis and Samorodnitsky (arXiv:1707.05033). Taking a peaks-over-threshold approach instead of block maxima, the Discrete Ge
32,296
Extreme value theory for count data
I don't know a definitive answer for your primary question. Although I found the following two references: Anderson, C. W., “Extreme value theory for a class of discrete distributions with applications to some stochastic processes”, Journal of Applied Probability, vol 7, 1970, pp. 99–113. Anderson, C. W., “Local limit theorems for the maxima of discrete random variables”, Mathematical Proceedings of the Cambridge Philosophical Society, vol 88, 1980, pp. 161– 165. For your secondary question, the CDF of the Poisson is $\frac{\Gamma(\lfloor k+1\rfloor,\lambda)}{\lfloor k\rfloor!}$ so $P(\max\limits_N X_n \leq M) = (\frac{\Gamma(\lfloor k+1\rfloor,\lambda)}{\lfloor k\rfloor!})^N$. Apply the difference operator (lag1) and you get the PMF of the max.
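In R this is a couple of lines with ppois; lambda and N below are chosen arbitrarily for illustration.

lambda <- 5; N <- 1000
k <- 0:40                                # support large enough to capture the maximum
Fmax <- ppois(k, lambda)^N               # P(max of N iid Poisson(lambda) draws <= k)
pmf  <- diff(c(0, Fmax))                 # lag-1 difference gives the PMF of the maximum
round(pmf[pmf > 1e-4], 4)
sum(k * pmf)                             # mean of the maximum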
Extreme value theory for count data
I don't know a definitive answer for your primary question. Although I found the following two references: Anderson, C. W., “Extreme value theory for a class of discrete distributions with application
Extreme value theory for count data I don't know a definitive answer for your primary question. Although I found the following two references: Anderson, C. W., “Extreme value theory for a class of discrete distributions with applications to some stochastic processes”, Journal of Applied Probability, vol 7, 1970, pp. 99–113. Anderson, C. W., “Local limit theorems for the maxima of discrete random variables”, Mathematical Proceedings of the Cambridge Philosophical Society, vol 88, 1980, pp. 161– 165. For your secondary question, the CDF of the Poisson is $\frac{\Gamma(\lfloor k+1\rfloor,\lambda)}{\lfloor k\rfloor!}$ so $P(\max\limits_N X_n \leq M) = (\frac{\Gamma(\lfloor k+1\rfloor,\lambda)}{\lfloor k\rfloor!})^N$. Apply the difference operator (lag1) and you get the PMF of the max.
Extreme value theory for count data I don't know a definitive answer for your primary question. Although I found the following two references: Anderson, C. W., “Extreme value theory for a class of discrete distributions with application
32,297
Extreme value theory for count data
The Poisson does not fall within the MDA of any EV distribution (it is not possible to find shifting and scaling sequences that provide a non-degenerate limit). This is a consequence of a theorem in Leadbetter, Lindgren and Rootzén (1983).
Extreme value theory for count data
The Poisson does not fall within the MDA of any EV distribution (not possible to find shifting and scaling sequences that provide a non-degenerate limit). Consequence of a theorem in Leadbetter, Lind
Extreme value theory for count data The Poisson does not fall within the MDA of any EV distribution (not possible to find shifting and scaling sequences that provide a non-degenerate limit). Consequence of a theorem in Leadbetter, Lindgren and Rootzen (1983).
Extreme value theory for count data The Poisson does not fall within the MDA of any EV distribution (not possible to find shifting and scaling sequences that provide a non-degenerate limit). Consequence of a theorem in Leadbetter, Lind
32,298
Understanding Bayesian Predictive Distributions
Suppose that $X_1,\dots,X_n,X_{n+1}$ are conditionally independent given that $\Theta=\theta$. Then, $$ f_{X_{n+1}\mid X_1,\dots,X_n}(x_{n+1}\mid x_1,\dots,x_n) = \int f_{X_{n+1},\Theta\mid X_1,\dots,X_n}(x_{n+1},\theta\mid x_1,\dots,x_n)\,d\theta $$ $$ = \int f_{X_{n+1}\mid\Theta,X_1,\dots,X_n}(x_{n+1}\mid\theta,x_1,\dots,x_n) f_{\Theta\mid X_1,\dots,X_n}(\theta\mid x_1,\dots,x_n) \, d\theta $$ $$ = \int f_{X_{n+1}\mid\Theta}(x_{n+1}\mid\theta) f_{\Theta\mid X_1,\dots,X_n}(\theta\mid x_1,\dots,x_n) \, d\theta \, , $$ in which the first equality follows from the law of total probability, the second follows from the product rule, and the third from the assumed conditional independence: given the value of $\Theta$, we don't need the values of $X_1,\dots,X_n$ to determine the distribution of $X_{n+1}$. The simulation scheme is correct: for $i=1,\dots,N$, draw $\theta^{(i)}$ from the distribution of $\Theta\mid X_1=x_1,\dots,X_n=x_n$, then draw $x_{n+1}^{(i)}$ from the distribution of $X_{n+1}\mid\Theta=\theta^{(i)}$. This gives you a sample $\{x_{n+1}^{(i)}\}_{i=1}^N$ from the distribution of $X_{n+1}\mid X_1=x_1,\dots,X_n=x_n$.
Understanding Bayesian Predictive Distributions
Suppose that $X_1,\dots,X_n,X_{n+1}$ are conditionally independent given that $\Theta=\theta$. Then, $$ f_{X_{n+1}\mid X_1,\dots,X_n}(x_{n+1}\mid x_1,\dots,x_n) = \int f_{X_{n+1},\Theta\mid X_1,\dot
Understanding Bayesian Predictive Distributions Suppose that $X_1,\dots,X_n,X_{n+1}$ are conditionally independent given that $\Theta=\theta$. Then, $$ f_{X_{n+1}\mid X_1,\dots,X_n}(x_{n+1}\mid x_1,\dots,x_n) = \int f_{X_{n+1},\Theta\mid X_1,\dots,X_n}(x_{n+1},\theta\mid x_1,\dots,x_n)\,d\theta $$ $$ = \int f_{X_{n+1}\mid\Theta,X_1,\dots,X_n}(x_{n+1}\mid\theta,x_1,\dots,x_n) f_{\Theta\mid X_1,\dots,X_n}(\theta\mid x_1,\dots,x_n) \, d\theta $$ $$ = \int f_{X_{n+1}\mid\Theta}(x_{n+1}\mid\theta) f_{\Theta\mid X_1,\dots,X_n}(\theta\mid x_1,\dots,x_n) \, d\theta \, , $$ in which the first equality follows from the law of total probability, the second follows from the product rule, and the third from the assumed conditional independence: given the value of $\Theta$, we don't need the values of $X_1,\dots,X_n$ to determine the distribution of $X_{n+1}$. The simulation scheme is correct: for $i=1,\dots,N$, draw $\theta^{(i)}$ from the distribution of $\Theta\mid X_1=x_1,\dots,X_n=x_n$, then draw $x_{n+1}^{(i)}$ from the distribution of $X_{n+1}\mid\Theta=\theta^{(i)}$. This gives you a sample $\{x_{n+1}^{(i)}\}_{i=1}^N$ from the distribution of $X_{n+1}\mid X_1=x_1,\dots,X_n=x_n$.
Understanding Bayesian Predictive Distributions Suppose that $X_1,\dots,X_n,X_{n+1}$ are conditionally independent given that $\Theta=\theta$. Then, $$ f_{X_{n+1}\mid X_1,\dots,X_n}(x_{n+1}\mid x_1,\dots,x_n) = \int f_{X_{n+1},\Theta\mid X_1,\dot
32,299
Understanding Bayesian Predictive Distributions
I'll try to go over the intuition behind generating the posterior predictive distribution step-by-step. Let $y$ be a vector of observed data that come from a probability distribution $p(y|\theta)$ and let $\tilde y$ be a vector of future (or out-of-sample) values we want to predict. We assume that $\tilde y$ comes from the same distribution as $y$. It might be tempting to use our best estimate of $\theta$---such as the MLE or MAP estimate---to obtain information about this distribution. However, doing so would inevitably ignore our uncertainty about $\theta$. Thus, the appropriate way to proceed is to average over the posterior distribution of $\theta$, namely $p(\theta|y)$. Notice also that $\tilde y$ is independent of $y$ given $\theta$, as it is assumed to be an independent sample drawn from the same distribution as $y$. Thus, $ \displaystyle p(\tilde y| \theta, y) = \frac{p(\tilde y, y|\theta )p(\theta)}{p(\theta, y)} = \frac{p(\tilde y|\theta )p(y |\theta) p(\theta)}{p(y| \theta)p(\theta)} = p(\tilde y |\theta).$ The posterior predictive distribution of $\tilde y$ is thus $ \displaystyle p(\tilde y|y ) = \int_\Theta p(\tilde y | \theta,y) p(\theta | y) d\theta = \int_\Theta p(\tilde y | \theta) p(\theta | y) d\theta $ where $\Theta$ is the support of $\theta$. Now, how do we obtain the samples from $p(\tilde y|y)$? The method you describe is sometimes called the method of composition, which works as follows: for s = 1, 2, ..., S: (1) draw $\theta^{(s)}$ from $p(\theta|y)$; (2) draw $\tilde y^{(s)}$ from $p(\tilde y|\theta^{(s)})$. In most situations we already have the draws from $p(\theta|y)$, so that only the second step is required. The reason why this works is quite simple: first note that $ p(\tilde y, \theta | y) = p(\tilde y| \theta, y)p(\theta | y)$. Thus, sampling a parameter vector $\theta^{(s)}$ from $p(\theta|y)$ and then using this vector to sample $\tilde y^{(s)}$ from $ p(\tilde y | \theta^{(s)}) = p(\tilde y | \theta^{(s)}, y)$ yields samples from the joint distribution $p(\tilde y, \theta|y)$. It follows that the sampled values $\tilde y^{(s)}, s=1,2,...,S$ are samples from the marginal distribution, $p(\tilde y|y)$.
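Here is a minimal R sketch of the method of composition for a conjugate Gamma-Poisson model; the data and prior below are invented purely for illustration.

set.seed(1)
y <- c(3, 5, 2, 4, 6)                    # observed counts, modelled as Poisson(theta)
a <- 1; b <- 1                           # Gamma(a, b) prior on theta
S <- 10000

theta   <- rgamma(S, a + sum(y), b + length(y))   # step 1: draws from the posterior p(theta | y)
y_tilde <- rpois(S, theta)                        # step 2: one draw from p(y_tilde | theta^(s)) each
quantile(y_tilde, c(0.025, 0.5, 0.975))           # posterior predictive summary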
Understanding Bayesian Predictive Distributions
I'll try to go over the intuition behind generating the posterior predictive distribution step-by-step. Let $y$ be a vector of observed data that come from a probability distribution $p(y|\theta)$ and
Understanding Bayesian Predictive Distributions I'll try to go over the intuition behind generating the posterior predictive distribution step-by-step. Let $y$ be a vector of observed data that come from a probability distribution $p(y|\theta)$ and let $ \tilde y$ be a vector of future (or out-of-sample) values we want to predict. We assume that $ \tilde y$ comes from the same distribution as $ y$. It might be tempting to use our best estimate of $\theta$---such as the MLE or MAP estimate---to obtain information about this distribution. However, doing so would inevitably ignore our uncertainty about $ \theta$. Thus, the appropriate way to procede is to average over the posterior distribution of $ \theta$, namely $ p(\theta|y)$. Notice also that $ \tilde y$ is independent of $ y$ given $ \theta$, as it is assumed to be an independent sample drawn from the same distribution as $ y$. Thus, $ \displaystyle p(\tilde y| \theta, y) = \frac{p(\tilde y, y|\theta )p(\theta)}{p(\theta, y)} = \frac{p(\tilde y|\theta )p(y |\theta) p(\theta)}{p(y| \theta)p(\theta)} = p(\tilde y |\theta).$ The posterior predictive distribution of $ \tilde y$ is thus, $ \displaystyle p(\tilde y|y ) = \int_\Theta p(\tilde y | \theta,y) p(\theta | y) d\theta = \int_\Theta p(\tilde y | \theta) p(\theta | y) d\theta $ where $ \Theta$ is the support of $ \theta$. Now, how do we obtain the samples from $ p(\tilde y|y)$? The method you describe is sometimes called the method of composition, which works as follows: for s = 1,2,...,S do draw $\theta^{(s)}$ from $ p(\theta|y)$ draw $\tilde y^{(s)}$ from $ p(\tilde y|\theta^{(s)})$ where, in most situations, we have already the draws from $ p(\theta|y)$, so that only the second step is required. The reason why this works is quite simple: First note that $ p(\tilde y, \theta | y) = p(\tilde y| \theta, y)p(\theta | y)$. Thus, sampling a parameter vector $\theta^{(s)}$ from $ p(\theta|y)$ and, then, using this vector to sample $ \tilde y^{(s)}$ from $ p(\tilde y | \theta^{(s)}) = p(\tilde y | \theta^{(s)}, y)$ yields samples from the joint distribution $ p(\tilde y, \theta|y)$. It follows that, the sampled values $\tilde y^{(s)}, s=1,2,...,S$ are samples from the marginal distribution, $p(\tilde y|y)$.
Understanding Bayesian Predictive Distributions I'll try to go over the intuition behind generating the posterior predictive distribution step-by-step. Let $y$ be a vector of observed data that come from a probability distribution $p(y|\theta)$ and
32,300
Understanding Bayesian Predictive Distributions
To address your first question: yes, the observations are not independent if you don't know the value of $\theta$. Say, you've observed that $\tilde{y}_1$ has rather extreme value. It may be an indication that the unknown value of $\theta$ itself is extreme, and, thus, you should expect other observations to be extreme as well.
Understanding Bayesian Predictive Distributions
To address your first question: yes, the observations are not independent if you don't know the value of $\theta$. Say, you've observed that $\tilde{y}_1$ has rather extreme value. It may be an indica
Understanding Bayesian Predictive Distributions To address your first question: yes, the observations are not independent if you don't know the value of $\theta$. Say, you've observed that $\tilde{y}_1$ has rather extreme value. It may be an indication that the unknown value of $\theta$ itself is extreme, and, thus, you should expect other observations to be extreme as well.
Understanding Bayesian Predictive Distributions To address your first question: yes, the observations are not independent if you don't know the value of $\theta$. Say, you've observed that $\tilde{y}_1$ has rather extreme value. It may be an indica