Dataset columns (name, type, value/length range):
idx: int64, 1 to 56k
question: string, lengths 15 to 155
answer: string, lengths 2 to 29.2k
question_cut: string, lengths 15 to 100
answer_cut: string, lengths 2 to 200
conversation: string, lengths 47 to 29.3k
conversation_cut: string, lengths 47 to 301
17,601
The connection between Bayesian statistics and generative modeling
@Tristan: Hope you don't mind my reworking of your answer, as I am working on how to make the general point as transparent as possible. To me, the primary insight in statistics is to conceptualize repeated observations that vary as being generated by a probability generating model, such as Normal(mu, sigma). Early in the 1800s, the probability generating models entertained were usually just for errors of measurement, with the role of parameters such as mu and sigma, and of priors for them, muddled. Frequentist approaches took the parameters as fixed and unknown, so their probability generating models only involved possible observations. Bayesian approaches (with proper priors) have probability generating models for both possible unknown parameters and possible observations. These joint probability generating models comprehensively account for all of the, to put it more generally, possible unknowns (such as parameters) and knowns (such as observations). As in the link from Rubin you gave, conceptually Bayes' theorem says: only keep the possible unknowns that (in the simulation) actually generated possible knowns equal (or very close) to the actual knowns in your study. This was very clearly depicted by Galton with a two-stage quincunx in the late 1800s; see figure 5 in Stigler, Stephen M. (2010). Darwin, Galton and the statistical enlightenment. Journal of the Royal Statistical Society: Series A 173(3): 469-482. It is equivalent, but perhaps more transparent, to write posterior = prior(possible unknowns | possible knowns = knowns) rather than posterior ~ prior(possible unknowns) * p(possible knowns = knowns | possible unknowns). In the former view there is nothing much new for missing values: one just adds possible unknowns for a probability model generating the missing values and treats "missing" as just one of the possible knowns (e.g. the 3rd observation was missing).
Recently, approximate Bayesian computation (ABC) has taken this constructive two-stage simulation approach seriously for cases where p(possible knowns = knowns | possible unknowns) cannot be worked out. But even when it can be worked out, and the posterior is easily obtainable by MCMC sampling (or even directly available because the prior is conjugate), Rubin's point that this two-stage sampling construction enables easier understanding should not be overlooked. For instance, I am sure it would have caught the problem @Zen pointed out in "Bayesians: slaves of the likelihood function?": one would have needed to draw a possible unknown c from a prior (stage one) and then draw a possible known (data) given that c (stage two), which would not have been a valid random generation, as p(possible knowns | c) would not have been a probability except for one and only one c. From @Zen: "Unfortunately, in general, this is not a valid description of a statistical model. The problem is that, by definition, $f_{X_i\mid C}(\,\cdot\mid c)$ must be a probability density for almost every possible value of $c$, which is, in general, clearly false."
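The two-stage reading of Bayes' theorem above can be sketched as an ABC-style rejection sampler. This is a toy illustration of my own (normal data with known sd 1, a Normal(0, 5) prior on mu, the sample mean as the compared summary, and an arbitrary tolerance of 0.05; none of these specific choices come from the answer):

```python
import numpy as np

rng = np.random.default_rng(1)

# "actual knowns": 20 observations, generated here with true mu = 2
observed = rng.normal(loc=2.0, scale=1.0, size=20)
obs_mean = observed.mean()

n_sims = 200_000
# stage one: draw possible unknowns (mu) from the prior
mu = rng.normal(0.0, 5.0, size=n_sims)
# stage two: draw possible knowns given each mu; the mean of 20
# Normal(mu, 1) draws is itself Normal(mu, 1/sqrt(20))
sim_means = rng.normal(mu, 1.0 / np.sqrt(20))
# Bayes as rejection: keep only the mu whose simulated knowns were
# "equal (very close)" to the actual knowns
kept = mu[np.abs(sim_means - obs_mean) < 0.05]

print(kept.mean(), kept.std())  # approximates the exact conjugate posterior
```

Because the sample mean is sufficient here, the kept draws approximate the exact Normal posterior; in real ABC problems the summary is usually not sufficient and the tolerance introduces additional approximation.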
17,602
What books provide an overview of computational statistics as it applies to computer science?
I'd suggest Christopher Bishop's "Pattern Recognition and Machine Learning". You can see some of it, including a sample chapter, at https://www.microsoft.com/en-us/research/people/cmbishop/#!prml-book
17,603
What books provide an overview of computational statistics as it applies to computer science?
You might want to read the extremely popular question on Stack Overflow on what statistics a programmer or computer scientist should know.
17,604
What books provide an overview of computational statistics as it applies to computer science?
Here is a very nice book by James E. Gentle, Computational Statistics (Springer, 2009), which covers both computational and statistical aspects of data analysis. Gentle has also authored other great books; check his publications. Another great book is the Handbook of Computational Statistics by Gentle et al. (Springer, 2004); it is circulating as a PDF somewhere on the web, so just try looking for it on Google.
17,605
What books provide an overview of computational statistics as it applies to computer science?
You've mentioned some ML techniques, so here are two quite nice books (only "quite" because, unfortunately, my favorite is in Polish): http://www.amazon.com/Machine-Learning-Algorithmic-Perspective-Recognition/dp/1420067184 and http://ai.stanford.edu/~nilsson/mlbook.html. For numeric stuff like random number generation: http://www.nr.com/
17,606
What books provide an overview of computational statistics as it applies to computer science?
I picked up a copy of Probability and Statistics for Computer Scientists by Michael Baron on sale with another statistics book (I honestly bought it because of the name - I wanted a book that would take some kind of look at statistics from a computer science perspective, even if it wasn't perfect). I haven't had a chance to read it or work any problems in it yet, but it seems like a solid book. The preface says that it's for upper-level undergraduate students and beginning graduate students, and I would agree with this. Some understanding of probability and statistics is necessary to grasp the contents. Topics include probability, discrete random variables, continuous distributions, Monte Carlo methods, stochastic processes, queuing systems, statistical inference, and regression.
17,607
What books provide an overview of computational statistics as it applies to computer science?
Although it's not specifically computational statistics, A Handbook of Statistical Analyses Using R by Brian S. Everitt and Torsten Hothorn covers a lot of topics that I've seen covered in basic and intermediate statistics books - inference, ANOVA, linear regression, logistic regression, density estimation, recursive partitioning, principal component analysis, and cluster analysis - using the R language. This might be of interest to those interested in programming. Unlike other books, the emphasis is on using the R language to carry out these statistical procedures: other books I've seen use combinations of algebra and calculus to present statistics, while this book actually focuses on how to analyze data using R. And to make it even more useful, the data sets the authors use are available on CRAN, the R repository.
17,608
What books provide an overview of computational statistics as it applies to computer science?
Statistical Computing with R by Maria L. Rizzo covers a lot of the topics in Probability and Statistics for Computer Scientists - basic probability and statistics, random variables, Bayesian statistics, Markov chains, visualization of multivariate data, Monte Carlo methods, permutation tests, probability density estimation, and numerical methods. The equations and formulas used are presented both as mathematical formulas and as R code. I would say that a basic knowledge of probability, statistics, calculus, and maybe discrete mathematics is advisable for anyone who wants to read this book. A programming background would also be helpful, but there are some references for the R language, its operators, and its syntax.
17,609
What books provide an overview of computational statistics as it applies to computer science?
As a computer engineer coming to data analysis myself, I found Programming Collective Intelligence by Toby Segaran a really readable book that covers things from a pretty unintimidating perspective (at the cost of not covering as much as any of the other books suggested here). It is a lot more approachable than, for example, Bishop's book, which is a great reference but goes into more depth than you probably want at a first pass. On Amazon: http://www.amazon.com/Programming-Collective-Intelligence-Building-Applications/dp/0596529325
17,610
What books provide an overview of computational statistics as it applies to computer science?
CRAN maintains a list of books pertaining to statistical programming. Some of them will not cover machine learning or MCMC, but each entry is annotated, so you should get a rough idea of what each book contains before diving in further. http://www.r-project.org/doc/bib/R-books.html
17,611
Wouldn't multiple filters in a convolutional layer learn the same parameter during training?
I have found the answer to this question: https://www.quora.com/Why-does-each-filter-learn-different-features-in-a-convolutional-neural-network It says there: "... (optimization) algorithm finds that loss does not decrease if two filters have similar weights and biases, so it'll eventually change one of the filter(‘s weights and biases) in order to reduce loss, thereby learning a new feature." Thank you for the answers. Appreciate it :)
17,612
Wouldn't multiple filters in a convolutional layer learn the same parameter during training?
I had the same confusion when trying to understand this. It arises for beginners because the book doesn't explicitly state that the filters are different. "since these filters are applied similarly": the filters are applied in the same way, but the values in each filter's matrix differ from filter to filter, so they extract different features from the image. "wouldn't they just learn the same parameters during training": no, they don't learn the same parameters, since the filters are different to begin with. So the use of multiple filters is not redundant.
17,613
Wouldn't multiple filters in a convolutional layer learn the same parameter during training?
Also, since the weights of the filters are initialized to random values, each filter will most likely converge to its own nearby local minimum. This, I guess, ensures that the filters aren't exactly the same, though there might still be some closely related features in parts of different filters.
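The symmetry argument above can be illustrated with a deliberately tiny toy model of my own (a single scalar "response" per 3x3 filter and a squared-error loss; this is not a real CNN): filters that start identical receive identical gradients and never separate, while randomly initialized filters remain different after training.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 8))  # toy "image"

# S[a, b] is the sum, over all 36 valid 3x3 positions, of the image
# pixel that filter weight (a, b) multiplies; the scalar response of a
# filter f is then np.sum(S * f)
S = sum(x[i:i+3, j:j+3] for i in range(6) for j in range(6))

def step(f, lr=1e-4, target=1.0):
    # one gradient step on the toy loss 0.5 * (response - target)^2;
    # the gradient w.r.t. the filter is (response - target) * S
    r = np.sum(S * f)
    return f - lr * (r - target) * S

# identical initialization: both filters receive identical gradients,
# so the symmetry is never broken
f1 = np.full((3, 3), 0.1)
f2 = f1.copy()
for _ in range(50):
    f1, f2 = step(f1), step(f2)
print(np.allclose(f1, f2))   # True

# different random initializations break the symmetry: the filters
# stay different even after training on the same loss
g1 = rng.normal(scale=0.1, size=(3, 3))
g2 = rng.normal(scale=0.1, size=(3, 3))
for _ in range(50):
    g1, g2 = step(g1), step(g2)
print(np.allclose(g1, g2))   # False
```

The same symmetry-breaking role of random initialization is why real CNN layers are never initialized with all-equal weights.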
17,614
Can support vector machine be used in large data?
As you mention, storing the kernel matrix requires memory that scales quadratically with the number of data points. Training time for traditional SVM algorithms also scales superlinearly with the number of data points. So, these algorithms aren't feasible for large data sets.
One possible trick is to reformulate a kernelized SVM as a linear SVM. Each element $K_{ij}$ of the kernel matrix represents the dot product between data points $x_i$ and $x_j$ after mapping them (possibly nonlinearly) into a feature space: $K_{ij} = \Phi(x_i) \cdot \Phi(x_j)$. The feature space mapping $\Phi$ is defined implicitly by the kernel function, and kernelized SVMs don't explicitly compute feature space representations. This is computationally efficient for small to medium-sized datasets, as the feature space can be very high dimensional, or even infinite dimensional. But, as above, this becomes infeasible for large datasets. Instead, we can explicitly map the data nonlinearly into feature space, then efficiently train a linear SVM on the feature space representations. The feature space mapping can be constructed to approximate a given kernel function, but use fewer dimensions than the 'full' feature space mapping. For large datasets, this can still give us rich feature space representations, but with many fewer dimensions than data points.
One approach to kernel approximation uses the Nyström approximation (Williams and Seeger 2001). This is a way to approximate the eigenvalues/eigenvectors of a large matrix using a smaller submatrix. Another approach uses randomized features, and is sometimes called 'random kitchen sinks' (Rahimi and Recht 2007).
Another trick for training SVMs on large datasets is to approximate the optimization problem with a set of smaller subproblems. For example, using stochastic gradient descent on the primal problem is one approach (among many others). Much work has been done on the optimization front. Menon (2009) gives a good survey.
References:
- Williams and Seeger (2001). Using the Nyström method to speed up kernel machines.
- Rahimi and Recht (2007). Random features for large-scale kernel machines.
- Menon (2009). Large-scale support vector machines: Algorithms and theory.
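As a sketch of the randomized-features idea (following the cosine-feature construction of Rahimi and Recht 2007; the input dimension, gamma, and feature count below are arbitrary choices of mine), one can build an explicit map z whose dot products approximate the RBF kernel, and then train any ordinary linear SVM on z(X) instead of kernelizing:

```python
import numpy as np

rng = np.random.default_rng(2)

# RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)
gamma = 0.5
d, D = 5, 2000   # input dimension, number of random features

# random Fourier features: for this kernel, draw frequencies from
# N(0, 2*gamma I) and phases uniformly on [0, 2*pi); then
# z(x) . z(y) approximates k(x, y)
W = rng.normal(scale=np.sqrt(2 * gamma), size=(D, d))
b = rng.uniform(0.0, 2 * np.pi, size=D)

def z(X):
    return np.sqrt(2.0 / D) * np.cos(X @ W.T + b)

X = rng.normal(size=(10, d))
K_exact = np.exp(-gamma * ((X[:, None] - X[None]) ** 2).sum(-1))
K_approx = z(X) @ z(X).T
print(np.abs(K_exact - K_approx).max())  # small approximation error
```

Training a linear SVM on the (n, D) matrix z(X) then needs memory linear in n rather than the quadratic cost of the full kernel matrix, which is the point of the reformulation.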
Can support vector machine be used in large data?
As you mention, storing the kernel matrix requires memory that scales quadratically with the number of data points. Training time for traditional SVM algorithms also scales superlinearly with the numb
Can support vector machine be used in large data? As you mention, storing the kernel matrix requires memory that scales quadratically with the number of data points. Training time for traditional SVM algorithms also scales superlinearly with the number of data points. So, these algorithms aren't feasible for large data sets. One possible trick is to reformulate a kernelized SVM as a linear SVM. Each element $K_{ij}$ of the kernel matrix represents the dot product between data points $x_i$ and $x_j$ after mapping them (possibly nonlinearly) into a feature space: $K_{ij} = \Phi(x_i) \cdot \Phi(x_j)$. The feature space mapping $\Phi$ is defined implicitly by the kernel function, and kernelized SVMs don't explicitly compute feature space representations. This is computationally efficient for small to medium size datasets, as the feature space can be very high dimensional, or even infinite dimensional. But, as above, this becomes infeasible for large datasets. Instead, we can explicitly map the data nonlinearly into feature space, then efficiently train a linear SVM on the feature space representations. The feature space mapping can be constructed to approximate a given kernel function, but use fewer dimensions than the 'full' feature space mapping. For large datasets, this can still give us rich feature space representations, but with many fewer dimensions than data points. One approach to kernel approximation uses the Nyström approximation (Williams and Seeger 2001). This is a way to approximate the eigenvalues/eigenvectors of a large matrix using a smaller submatrix. Another approach uses randomized features, and is somtimes called 'random kitchen sinks' (Rahimi and Recht 2007). Another trick for training SVMs on large datasets is to approximate the optimization problem with a set of smaller subproblems. For example, using stochastic gradient descent on the primal problem is one approach (among many others). Much work has been done on the optimization front. 
Menon (2009) gives a good survey. References Williams and Seeger (2001). Using the Nystroem method to speed up kernel machines. Rahimi and Recht (2007). Random features for large-scale kernel machines. Menon (2009). Large-scale support vector machines: Algorithms and theory.
Median absolute deviation (MAD) and SD of different distributions
To address the question in comments:

> I would like to know if there is a possible range of values of the constant

(I assume the question is intended to be about the median absolute deviation from the median.)

The ratio of SD to MAD can be made arbitrarily large. Take some distribution with a given ratio of SD to MAD. Hold the middle $50\%+\epsilon$ of the distribution fixed (so the MAD is unchanged) and move the tails out further: the SD increases, and by moving them far enough you can push the ratio beyond any given finite bound.

The ratio of SD to MAD can easily be made as near to $\sqrt{\frac{1}{2}}$ as desired by (for example) putting mass $25\%+\epsilon$ at each of $\pm 1$ and mass $50\%-2\epsilon$ at $0$. I think that is as small as it goes.
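Both claims are easy to check numerically. Here is a Python sketch of my own (not part of the original answer), representing each discrete distribution by a large sample with the stated proportions:

```python
import numpy as np

def sd_mad(values, probs, n=100_000):
    """SD and MAD (median absolute deviation about the median) of a
    discrete distribution, via a large sample with exact proportions."""
    counts = np.rint(np.asarray(probs) * n).astype(int)
    x = np.repeat(values, counts)
    med = np.median(x)
    return x.std(), np.median(np.abs(x - med))

# Middle mass held fixed while the tails move out: MAD stays pinned at 1
# while the SD grows without bound
for t in [1, 10, 100]:
    sd, mad = sd_mad([-t, -1, 0, 1, t], [0.15, 0.15, 0.40, 0.15, 0.15])
    print(t, sd / mad)   # the ratio grows roughly like t

# Near the conjectured lower bound: ratio close to sqrt(1/2) ~ 0.707
sd, mad = sd_mad([-1, 0, 1], [0.26, 0.48, 0.26])
print(sd / mad)
```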
Median absolute deviation (MAD) and SD of different distributions
For any given distribution with density $f(x;\theta)$, the median absolute deviation is given by $\text{MAD}_\theta=G^{-1}_\theta(1/2)$, where $G_\theta$ is the cdf of $|X-\text{MED}_\theta|$ and $\text{MED}_\theta=F^{-1}_\theta(1/2)$, where $F_\theta$ is the cdf of $X$. In cases when $\theta=\sigma$, i.e., when the standard deviation is the only parameter, $\text{MAD}_\theta$ is therefore a deterministic function of $\sigma$. In cases when $\theta=(\mu,\sigma)$ and $\mu$ is a location parameter, i.e., when $$f(x;\theta)=g(\{x-\mu\}/\sigma)/\sigma,$$ the distribution of $|X-\text{MED}_\theta|$ is the same as the distribution of $|\{X-\mu\}-\{\text{MED}_\theta-\mu\}|$, and hence is independent of $\mu$. Therefore $G_\theta$ only depends on $\sigma$, and $\text{MAD}_\theta$ is again a deterministic function of $\sigma$.
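As a concrete check of the location-scale claim (my own Python sketch, not part of the answer): for the Normal family, $|X-\text{MED}|/\sigma=|Z|$ with $Z$ standard normal, so $\text{MAD}=\sigma\,\Phi^{-1}(3/4)\approx 0.6745\,\sigma$, the same deterministic function of $\sigma$ for every $\mu$.

```python
import numpy as np

rng = np.random.default_rng(0)

# For X ~ N(mu, sigma^2): MED = mu and |X - MED|/sigma = |Z|, so
# MAD = sigma * Phi^{-1}(3/4) ~ 0.6745 * sigma, whatever mu is.
ratios = []
for mu, sigma in [(0.0, 1.0), (5.0, 1.0), (-3.0, 2.5)]:
    x = rng.normal(mu, sigma, size=1_000_000)
    mad = np.median(np.abs(x - np.median(x)))
    ratios.append(mad / sigma)
print(ratios)   # each entry is close to 0.6745
```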
Why does the standard error of the intercept increase the further $\bar x$ is from 0?
Because the regression line fit by ordinary least squares will necessarily go through the mean of your data (i.e., $(\bar x, \bar y)$), at least as long as you don't suppress the intercept, uncertainty about the true value of the slope has no effect on the vertical position of the line at the mean of $x$ (i.e., at $\hat y_{\bar x}$). This translates into less vertical uncertainty at $\bar x$ than you have the further away from $\bar x$ you are. So if the intercept (that is, the place where $x=0$) falls at $\bar x$, your uncertainty about the true value of $\beta_0$ is minimized. In mathematical terms, this translates into the smallest possible value of the standard error for $\hat\beta_0$.

Here is a quick example in R:

    set.seed(1)                    # this makes the example exactly reproducible
    x0  = rnorm(20, mean=0,  sd=1) # the mean of x varies from 0 to 10
    x5  = rnorm(20, mean=5,  sd=1)
    x10 = rnorm(20, mean=10, sd=1)
    y0  = 5 + 1*x0  + rnorm(20)    # all data come from the same
    y5  = 5 + 1*x5  + rnorm(20)    # data generating process
    y10 = 5 + 1*x10 + rnorm(20)
    model0  = lm(y0~x0)            # all models are fit the same way
    model5  = lm(y5~x5)
    model10 = lm(y10~x10)

This figure is a bit busy, but you can see the data from several different studies where the distribution of $x$ was closer or further from $0$. The slopes differ a little from study to study, but are largely similar. (Notice they all go through the circled X that I used to mark $(\bar x, \bar y)$.) Nonetheless, the uncertainty about the true value of those slopes causes the uncertainty about $\hat y$ to expand the further you get from $\bar x$, meaning that $SE(\hat\beta_0)$ is very wide for the data that were sampled in the neighborhood of $x=10$, and very narrow for the study in which the data were sampled near $x=0$.

Edit in response to comment: Unfortunately, centering your data after you have them will not help you if you want to know the likely $y$ value at some $x$ value $x_\text{new}$. Instead, you need to center your data collection on the point you care about in the first place. To understand these issues more fully, it may help you to read my answer here: Linear regression prediction interval.
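The same point follows from the standard OLS formula $SE(\hat\beta_0)=\hat\sigma\sqrt{1/n+\bar x^2/\sum_i(x_i-\bar x)^2}$, which grows with $|\bar x|$. Here is a Python translation of the simulation above (my own sketch, not the original R code):

```python
import numpy as np

rng = np.random.default_rng(1)

def intercept_se(x, y):
    """OLS standard error of the intercept:
    sigma_hat * sqrt(1/n + xbar^2 / sum((x - xbar)^2))."""
    n = len(x)
    X = np.column_stack([np.ones(n), x])
    beta, ssr, *_ = np.linalg.lstsq(X, y, rcond=None)
    sigma2 = ssr[0] / (n - 2)               # residual variance
    cov = sigma2 * np.linalg.inv(X.T @ X)   # covariance matrix of beta_hat
    return np.sqrt(cov[0, 0])

ses = []
for mean_x in [0, 5, 10]:                   # same design as the R example
    x = rng.normal(mean_x, 1, size=20)
    y = 5 + 1 * x + rng.normal(size=20)
    ses.append(intercept_se(x, y))
print(ses)   # SE(beta0_hat) grows as xbar moves away from 0
```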
Specifying a covariance structure: pros and cons
Basically, you must specify a covariance structure in GLM. If by "assuming no covariance", you mean "all off-diagonal entries in the covariance matrix are zero", then all you did was assume one very specific covariance structure. (You could be even more specific, e.g., by assuming that all variances are equal.) This is really a variation on "I don't subscribe to any philosophy; I'm a pragmatist." - "You just described the philosophy you subscribe to." As such, I would say that the advantage of thinking about the covariance structure is the chance of using a model that is more appropriate to your data. Just as you should include known functional relationships for the expected value (or the mean) of your observations, you should account for any structure you know in the covariance. And of course, the "disadvantage" is that you need to actually think about all this. Much easier to just use your software's default setting. But this is kind of like always driving in the first gear because your car was in first gear when you bought it and understanding the gear shift takes effort. Not recommended.
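To see why the choice matters even in the simplest setting, here is a small numeric illustration of my own (not from the answer; the equicorrelation value is an arbitrary example): the variance of a plain sample mean already depends on which covariance structure you assume.

```python
import numpy as np

rng = np.random.default_rng(0)

# n equicorrelated observations with correlation rho: assuming "no covariance"
# (a diagonal matrix) badly understates the variance of the sample mean.
n, rho, sigma2 = 10, 0.5, 1.0
Sigma = sigma2 * (rho * np.ones((n, n)) + (1 - rho) * np.eye(n))

ones = np.ones(n)
var_mean_true = ones @ Sigma @ ones / n**2   # uses the full structure
var_mean_naive = sigma2 / n                  # pretends Sigma is diagonal
print(var_mean_true, var_mean_naive)         # 0.55 vs 0.1

# Monte Carlo check of the true value
x = rng.multivariate_normal(np.zeros(n), Sigma, size=200_000)
print(x.mean(axis=1).var())                  # close to var_mean_true
```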
Specifying a covariance structure: pros and cons
Here's another incomplete answer that isn't even directly about GLM...In my very limited experience with structural equation modeling (SEM), I've picked up a couple ideas that I hope might add something to the discussion. Please bear in mind throughout that I'm speaking from (limited) experience with SEM, not GLM per se, and I'm fairly ignorant of whether and where this distinction might become important. I'm more of a stats user than a statistician, so I'm also not sure that these ideas will apply to all or even most data; I've only found that they have applied to most of my own. First, I'd echo @StephanKolassa's emphasis on the importance of modeling what you already know. You acknowledge this as an aside, but I think the benefits that you're asking about are benefits of modeling what you know. As such, they meaningfully reflect that your resultant model possesses the information about the covariance structure that you've added. In SEM, I have found (through limited experience, not through theoretical study):

Benefits

Modeling the covariance structure improves goodness of fit (GoF) if the covariance is much stronger than its standard error (i.e., if the symmetric pathway is significant). This means you usually won't improve GoF by modeling near-zero correlations, and multicollinearity can cause problems for GoF because it inflates standard errors.

Haven't tried holding out data to predict yet, but my intuition is that fixing the covariances to zero in your model is analogous to predicting a DV by combining a set of separate, single-IV, linear regression equations. Unlike this approach, multiple regression accounts for covariance in the IVs when producing a model of equations to predict the DV. This certainly improves interpretability by separating direct effects from indirect effects that occur entirely within the included set of IVs. Honestly, I'm not sure whether this necessarily improves prediction of the DV though.
Being a stats user and not a statistician, I threw together the following simulation testing function to give an incomplete answer (apparently, "Yes, predictive accuracy improves when the model incorporates IV covariance") in this hopefully analogous case...

    simtestit = function(Sample.Size=100, Iterations=1000, IV.r=.3, DV.x.r=.4, DV.z.r=.4) {
      require(psych)
      output = matrix(NA, nrow=Iterations, ncol=6)
      for(i in 1:Iterations) {
        x = rnorm(Sample.Size)
        z = rnorm(Sample.Size) + x*IV.r
        y = rnorm(Sample.Size) + x*DV.x.r + z*DV.z.r
        y.predicted = x*lm(y ~ x + z)$coefficients[2] + z*lm(y ~ x + z)$coefficients[3]
        bizarro.y.predicted = x*lm(y ~ x)$coefficients[2] + z*lm(y ~ z)$coefficients[2]
        output[i, ] = c(cor(y.predicted, y)^2, cor(bizarro.y.predicted, y)^2,
                        cor(y.predicted, y)^2 > cor(bizarro.y.predicted, y)^2,
                        cor(x, z), cor(x, y), cor(y, z))
      }
      list(output = output,
           percent.of.predictions.improved = 100*sum(output[, 3])/Iterations,
           mean.improvement = fisherz2r(mean(fisherz(output[, 1]) - fisherz(output[, 2]))))
    }

    # Wrapping the function in str( ) gives you the gist without filling your whole screen
    str(simtestit())

This function generates random samples ($N =$ Iterations, $n$ = Sample.Size) from three normally distributed variables: z $=$ x $+$ random noise, and y $=$ x $+$ z $+$ random noise. The user can influence their correlations somewhat by overriding the defaults for the last three arguments, but the random noise affects the sample correlations too, so this simulates the way sampling error affects estimates of true correlation parameters. The function computes predictions of y based on regression coefficients for x and z derived from: ($1$) multiple regression (y.predicted), and ($2$) two separate, bivariate linear regressions (bizarro.y.predicted). The output matrix contains Iterations rows and six columns: the $R^2$s of $1$ and $2$, a true-false test of whether $1 > 2$, and the bivariate $r$s for the three unique combinations of x, y, & z. This function produces a three-element list, the first of which is the output matrix.
By default, this is 1,000 rows long, so I recommend wrapping simtestit() in the str( ) function or removing this element from the list in the function itself unless you're interested in the individual sample stats for some reason. The other two elements are the percentage of iterations in which $R^2$ was improved by using ($1$) multiple regression to account for the covariance of the IVs, and the mean of these improvements across the iterations (in the scale of $r$, using a Fisher transformation via the psych package). The function defaults to a short sim test of fairly typical circumstances for a maximally basic multiple regression. It permits the user to change individual sample sizes and variable correlations to suit the study and prior theories of relationship strength. I haven't tested all possible settings, but every time I've run the function, 100% of the iterations have produced higher $R^2$ with multiple regression. The mean improvement in $R^2$ seems to be greater when the covariance of the IVs (which can be manipulated incompletely by entering an argument for IV.r) is larger. Since you're probably more familiar with your GLM function than I am (which is not at all), you could probably change this function or use the basic idea to compare GLM predictions across however many IVs you want without too much trouble. Assuming that would (or does) turn out the same way, it would seem that the basic answer to your second question is probably yes, but how much depends on how strongly the IVs covary. Differences in sampling error between the held-out data and the data used to fit the model could overwhelm the improvement in its predictive accuracy within the latter dataset, because again, the improvement seems to be small unless IV correlations are strong (at least, in the maximally basic case with only two IVs).
Specifying a free path for covariance between IVs in the model means asking the model fitting function to estimate this pathway's coefficient, which represents the extent of covariance between IVs. If your GLM function allows you to specify a model in which the covariance between the IVs is freely estimated rather than fixed to zero, then your problem is a hopefully simple matter of figuring out how to do this and how to get your function to output that estimate. If your function estimates IV covariances by default, your problem simplifies further to just the latter matter (as is the case with lm( )).

Costs

Yes, freely estimating covariance between IVs means the model fitting algorithm has to do some work to estimate that pathway's coefficient. Not specifying that pathway in the model usually means fixing the coefficient to zero, which means the model fitting algorithm doesn't need to estimate the coefficient. Estimating additional covariance parameters means the overall model will require more time to fit. In models that already take a long time to estimate, the extra time can be substantial, especially if you have a lot of IVs. Yes, a freely-estimated covariance structure implies parameter estimates. Populations have covariance parameters, so if you're estimating population covariances, you're estimating parameters. However, if your model fits much better because you're choosing to estimate a non-trivial correlation rather than fixing it to zero, you can probably expect the Akaike and Bayesian information criteria to improve, just like other criteria that incorporate GoF. I'm not familiar with the deviance information criterion (the DIC to which you're referring, right?), but judging from its Wikipedia page, it also seems to incorporate GoF and a penalty for model complexity. Therefore the GoF should just need to improve proportionally more than the model's complexity increases to improve the DIC.
If this doesn't happen overall, criteria like these that penalize for model complexity will worsen as you estimate more IV covariances. This could be a problem if, for instance, your IVs don't correlate, but the covariance structure is freely estimated anyway because you think the IVs might correlate, or because that's your function's default setting. If you have prior theoretical reasons to assume a correlation is zero and you don't want your model to test this assumption, this is one case where you might be justified in fixing the path to zero. If your prior theory is approximately right, indices that penalize for model complexity will improve if you fix pathways to your prior theory instead of having the model fitting algorithm estimate them freely. Dunno which function you're working with, but once again, I'm sure I'm unfamiliar with it, so I'm sure this answer could be improved, especially my answer to the second benefit question (for one thing, a mathematical proof of what I'm answering by simulation about multiple regression is probably available somewhere out there). I'm not even familiar with GLM in general (assuming you do mean generalized, not general linear modeling, as the tag suggests), so I hope someone will comment on or edit this answer if the distinctions from SEM invalidate my answers to your questions at all. Nonetheless, it seems we've been waiting ten months for the gurus to speak up, so if this doesn't get them to do it, it'll just have to do by itself, I suppose. Let me know if you have a particular GLM function in mind that you'd like me to mess with in R though. I may be able to figure out how to answer #3 more directly for your application if you can specify a GLM function of interest in R. I'm no expert with simulation testing either, but I think your other four questions could be sim tested (more directly) too.
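For readers without R or the psych package, the core comparison in the simulation above can be sketched in a few lines of Python (my own translation, not the original function; the parameter values mirror the R defaults):

```python
import numpy as np

rng = np.random.default_rng(0)

def r2(pred, y):
    # squared correlation between predictions and the DV
    return np.corrcoef(pred, y)[0, 1] ** 2

n_iter, n = 200, 100
wins = 0
for _ in range(n_iter):
    x = rng.normal(size=n)
    z = rng.normal(size=n) + 0.3 * x              # IVs covary (IV.r = .3)
    y = rng.normal(size=n) + 0.4 * x + 0.4 * z    # DV.x.r = DV.z.r = .4
    # (1) multiple regression: slopes estimated jointly, IV covariance respected
    X = np.column_stack([np.ones(n), x, z])
    pred_mult = X @ np.linalg.lstsq(X, y, rcond=None)[0]
    # (2) "bizarro": two separate bivariate slopes, IV covariance ignored
    bx = np.cov(x, y)[0, 1] / x.var(ddof=1)
    bz = np.cov(z, y)[0, 1] / z.var(ddof=1)
    pred_biz = bx * x + bz * z
    wins += r2(pred_mult, y) > r2(pred_biz, y)
print(wins, "of", n_iter, "iterations favor multiple regression")
```

Since the multiple-regression fit maximizes the correlation with y among linear combinations of the IVs, it should win in essentially every iteration, matching the 100% result reported above.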
Similarity of two discrete Fourier transforms?
Spectral coherence, if used correctly, would do it. Coherence is computed at each frequency, and hence is a vector; a weighted sum of the coherences would therefore be a good measure. You would typically want to weight the coherences at frequencies that have high energy in the power spectral density. That way, you measure similarity at the frequencies that dominate the time series, instead of giving the coherence a large weight at a frequency whose content in the time series is negligible. In simple words, the basic idea is to find the frequencies at which the amplitude (energy) of the signals is high (interpret these as the frequencies that dominantly constitute each signal), then compare the similarities at these frequencies with a higher weight and compare the signals at the rest of the frequencies with a lower weight. The area that deals with questions of this kind is called cross-spectral analysis. http://www.atmos.washington.edu/~dennis/552_Notes_6c.pdf is an excellent introduction to cross-spectral analysis. Optimal lag: Also look at my answer over here: How to correlate two time series, with possible time differences. That deals with finding the optimal lag using the spectral coherence. R has functions to compute the power spectral densities, auto- and cross-correlations, Fourier transforms and coherence. You have to write code to find the optimal lag that attains the maximum weighted coherence. That said, code for weighting the coherence vector using the spectral density must also be written, after which you can average the weighted elements to get the similarity observed at the optimal lag.
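To make the idea concrete, here is a rough Python sketch (the example signals, the 5 Hz component, and the PSD-based weighting scheme are all illustrative assumptions, not a canonical recipe): two noisy series sharing a dominant frequency are compared by averaging their spectral coherence with weights taken from the power spectral density.

```python
import numpy as np
from scipy.signal import coherence, welch

# Two hypothetical series sharing a dominant 5 Hz component plus noise.
fs = 100.0
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 5 * t) + 0.3 * rng.standard_normal(t.size)
y = np.sin(2 * np.pi * 5 * t + 0.5) + 0.3 * rng.standard_normal(t.size)

# Coherence at each frequency (a vector, as described above).
f, cxy = coherence(x, y, fs=fs, nperseg=256)

# Weight each frequency by the power spectral density of x, so that the
# frequencies dominating the series contribute most to the summary score.
_, pxx = welch(x, fs=fs, nperseg=256)
weights = pxx / pxx.sum()
similarity = float(np.sum(weights * cxy))
print(similarity)  # a single number in [0, 1]
```

Sweeping this over candidate lags of one series against the other, and keeping the lag with the highest weighted coherence, gives the optimal-lag procedure described above.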
17,621
Similarity of two discrete Fourier transforms?
Have you tried another approach for climate signal detection/modelling, like a wavelet analysis? The big problem that can arise with the DFT in climate analysis is exactly what you mention: the oscillations are not perfectly periodic and they usually have different time spans, so they can actually have many different oscillation ranges, which is pretty confusing from a Fourier transform perspective. A wavelet analysis is more suitable for climate signals because it allows you to check different time spans of oscillation; just as different frequencies are played at different times by a musical instrument, you can check different frequencies in different time spans with the wavelet transform. If you are interested, this paper by Lau & Weng (1995) should erase most of your doubts about this method. The most interesting part is that the wavelet transform of a model and that of the data are almost directly comparable, because you can directly compare the time span that your model predicts, leaving out all of the spurious oscillation ranges that it doesn't. PS: I have to add that I wanted to post this as a comment, because it is not actually what the OP is asking for, but my comment would have been too large, so I decided to post it as an answer that might come in handy as an alternative approach to that of DFTs.
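As a rough illustration of comparing a model against data in the wavelet domain, here is a hand-rolled complex-Morlet transform in Python (the signals, the scale range and the w0 value are all hypothetical choices for illustration; this is not the method of Lau & Weng):

```python
import numpy as np

def morlet_cwt(sig, scales, w0=6.0):
    """Minimal complex-Morlet continuous wavelet transform (illustrative only)."""
    out = np.empty((len(scales), sig.size), dtype=complex)
    for i, s in enumerate(scales):
        tt = np.arange(-4 * s, 4 * s + 1)
        # Complex Morlet at scale s: an oscillation under a Gaussian envelope.
        psi = np.exp(1j * w0 * tt / s) * np.exp(-0.5 * (tt / s) ** 2) / np.sqrt(s)
        out[i] = np.convolve(sig, psi, mode="same")
    return out

t = np.linspace(0, 10, 500)
data = np.sin(2 * np.pi * t * (1 + 0.05 * t))   # oscillation whose period drifts
model = np.sin(2 * np.pi * t * (1 + 0.04 * t))  # imperfect model of the same signal

scales = np.arange(2, 61)
Wd = np.abs(morlet_cwt(data, scales))   # time-frequency "picture" of the data
Wm = np.abs(morlet_cwt(model, scales))  # ... and of the model

# Compare the two pictures directly, time span by time span.
similarity = np.corrcoef(Wd.ravel(), Wm.ravel())[0, 1]
print(similarity)
```

Because the scalograms are localized in time, the comparison tolerates the drifting, non-stationary periods that confuse a plain DFT.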
17,622
Similarity of two discrete Fourier transforms?
I voted for and second the use of wavelet- and spectrogram-based analysis as an alternative to the DFT. If you can decompose your series into localized time-frequency bins, it reduces the Fourier problems of aperiodicity and non-stationarity, as well as providing a nice profile of discretized data to compare. Once the data are mapped to a three-dimensional set of spectral energy vs. time and frequency, Euclidean distance can be used to compare profiles. A perfect match would approach the lower-bound distance of zero.* You can look into the time series data mining and speech recognition areas for similar approaches. *Note that the wavelet binning process will filter the information content somewhat. If there can be no loss in the compared data, it might be more suitable to compare using Euclidean distance in the time domain.
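A minimal Python sketch of the spectrogram-plus-Euclidean-distance idea (the signals, sampling rate and segment length are illustrative assumptions):

```python
import numpy as np
from scipy.signal import spectrogram

rng = np.random.default_rng(2)
fs = 200.0
t = np.arange(0, 10, 1 / fs)
a = np.sin(2 * np.pi * 8 * t) + 0.1 * rng.standard_normal(t.size)
b = np.sin(2 * np.pi * 8 * t) + 0.1 * rng.standard_normal(t.size)   # like a
c = np.sin(2 * np.pi * 30 * t) + 0.1 * rng.standard_normal(t.size)  # unlike a

# Map each series onto a grid of spectral energy vs. time and frequency.
_, _, Sa = spectrogram(a, fs=fs, nperseg=128)
_, _, Sb = spectrogram(b, fs=fs, nperseg=128)
_, _, Sc = spectrogram(c, fs=fs, nperseg=128)

# Euclidean distance between profiles; a perfect match approaches zero.
dist_ab = float(np.linalg.norm(Sa - Sb))
dist_ac = float(np.linalg.norm(Sa - Sc))
print(dist_ab, dist_ac)  # the similar pair is much closer than the dissimilar one
```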
17,623
Computationally efficient estimation of multivariate mode
The method that would fit the bill for what you want to do is the mean-shift algorithm. Essentially, mean-shift relies on moving along the direction of the gradient, which is estimated non-parametrically with the "shadow", $K'$ of a given kernel $K$. To wit, if the density $f(x)$ is estimated by $K$, then $\nabla f(x)$ is estimated by $K'$. Details of estimating the gradient of a kernel density are described in Fukunaga and Hostetler (1975), which also happened to introduce the mean-shift algorithm. A very detailed exposition on the algorithm is also given in this blog entry. REFERENCES: K. Fukunaga and L. Hostetler, "The estimation of the gradient of a density function, with applications in pattern recognition, " IEEE Transactions on Information Theory 21(1), January 1975.
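For a concrete picture of the iteration, here is a minimal numpy sketch (the Gaussian kernel, the bandwidth and the simulated data are illustrative assumptions, not Fukunaga and Hostetler's exact formulation):

```python
import numpy as np

def mean_shift_mode(data, start, bandwidth=0.5, tol=1e-6, max_iter=500):
    """Follow the estimated density gradient uphill from `start` (minimal sketch)."""
    x = np.asarray(start, dtype=float)
    for _ in range(max_iter):
        # Gaussian kernel weight of every observation relative to the current point.
        w = np.exp(-0.5 * np.sum((data - x) ** 2, axis=1) / bandwidth**2)
        x_new = (w[:, None] * data).sum(axis=0) / w.sum()  # the "mean shift" step
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x_new

rng = np.random.default_rng(3)
sample = rng.normal(loc=[2.0, -1.0], scale=0.5, size=(2000, 2))
mode = mean_shift_mode(sample, start=sample.mean(axis=0))
print(mode)  # lands near the true mode (2, -1)
```

With a multimodal density, running several starts from different points and keeping the endpoint with the highest estimated density is the usual guard against settling on a minor mode.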
17,624
Computationally efficient estimation of multivariate mode
If your main interest is two-dimensional problems, I would say that kernel density estimation is a good choice because it has nice asymptotic properties (note that I am not saying that it is the best). See for example

Parzen, E. (1962). On estimation of a probability density function and mode. Annals of Mathematical Statistics 33: 1065–1076.

de Valpine, P. (2004). Monte Carlo state space likelihoods by weighted posterior kernel density estimation. Journal of the American Statistical Association 99: 523-536.

For higher dimensions (4+) this method is really slow due to the well-known difficulty in estimating the optimal bandwidth matrix. Now, the problem with the command kde in the package ks is, as you mentioned, that it evaluates the density on a specific grid, which can be very limiting. This issue can be solved if you use the package ks for estimating the bandwidth matrix, using for example Hscv, implement the kernel density estimator yourself, and then optimise this function using the command optim. This is shown below using simulated data and a Gaussian kernel in R.

rm(list=ls())

# Required packages
library(mvtnorm)
library(ks)

# Simulated data
set.seed(1)
dat = rmvnorm(1000, c(0,0), diag(2))

# Bandwidth matrix
H.scv = Hlscv(dat)

# Implementation of the KDE
# (http://en.wikipedia.org/wiki/Kernel_density_estimation)
H.eig  = eigen(H.scv)
H.sqrt = H.eig$vectors %*% diag(sqrt(H.eig$values)) %*% solve(H.eig$vectors)
H  = solve(H.sqrt)
dH = det(H.scv)
Gkde = function(par){
  return(-log(mean(dmvnorm(t(H %*% t(par - dat)), rep(0,2), diag(2), log = FALSE)/sqrt(dH))))
}

# Optimisation
Max = optim(c(0,0), Gkde)$par
Max

Shape-restricted estimators tend to be faster, for example

Cule, M. L., Samworth, R. J. and Stewart, M. I. (2010). Maximum likelihood estimation of a multi-dimensional log-concave density. Journal of the Royal Statistical Society B 72: 545–600.

But they are too peaked for this purpose.
The problem in high dimensions is difficult to attack independently of the method used, due to the nature of the question itself. For example, the method proposed in another answer (mean-shift) is nice, but it is known that estimating the derivative of a density is even more difficult than estimating the density itself in terms of the errors (I am not criticising this, just pointing out how difficult this problem is). You will then probably need thousands of observations to accurately estimate the mode in dimensions higher than $4$ in non-toy problems. Other methods that you may consider using are: fitting a multivariate finite mixture of normals (or other flexible distributions), or

Abraham, C., Biau, G. and Cadre, B. (2003). Simple estimation of the mode of a multivariate density. The Canadian Journal of Statistics 31: 23–34.

I hope this helps.
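The same estimate-then-optimise recipe can be sketched in Python too (scipy's gaussian_kde chooses its own bandwidth, so this only illustrates the idea, not a port of the R code above):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import gaussian_kde

rng = np.random.default_rng(4)
dat = rng.standard_normal((1000, 2))

# Kernel density estimate; gaussian_kde expects one column per observation.
kde = gaussian_kde(dat.T)

# Optimise the (negative log) density instead of scanning a fixed grid.
res = minimize(lambda p: -kde.logpdf(p)[0], x0=dat.mean(axis=0),
               method="Nelder-Mead")
print(res.x)  # estimated mode, near (0, 0) for this sample
```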
17,625
Computationally efficient estimation of multivariate mode
Recently we published a paper suggesting a fast consistent mode estimator.

P.S. Ruzankin and A.V. Logachov (2019). A fast mode estimator in multidimensional space. Statistics & Probability Letters

Our estimator has time complexity $O(dn)$, where $d$ is the dimensionality and $n$ is the number of observed points. Though our method may not be as precise as the other ones already mentioned here, we write out complete proofs of consistency and strong consistency. I would also suggest the new minimal variance mode estimators from my recent paper

P.S. Ruzankin (2020). A class of nonparametric mode estimators. Communications in Statistics - Simulation and Computation

Those estimators have time complexity $O(dn^2)$ for $n$ points in ${\mathbb R}^d$; please see Section 2.3 there. The estimators have precision similar to that of the known algorithms.
17,626
Why are rlm() regression coefficient estimates different than lm() in R?
The difference is that rlm() fits models using your choice of a number of different $M$-estimators, while lm() uses ordinary least squares. In general the $M$-estimator for a regression coefficient minimizes $$ \sum_{i=1}^{n} \rho \left( \frac{Y_i - {\bf X}_{i} {\boldsymbol \beta}}{\sigma} \right) $$ as a function of ${\boldsymbol \beta}$, where $Y_i$ is the $i$'th response and ${\bf X}_{i}$ is the vector of predictors for individual $i$. Least squares is a special case of this where $$ \rho(x) = x^2. $$ However, the default setting for rlm(), which you appear to be using, is the Huber $M$-estimator, which uses $$ \rho(x) = \begin{cases} \frac{1}{2} x^2 &\mbox{if } |x| \leq k\\ k |x| - \frac{1}{2} k^2 & \mbox{if } |x| > k, \end{cases} $$ where $k$ is a constant. The default in rlm() is $k = 1.345$. These two estimators are minimizing different criteria, so it is no surprise that the estimates are different. Edit: From the QQ plot shown above, it looks like you have a very long-tailed error distribution. This is the kind of situation the Huber $M$-estimator is designed for and, in that situation, it can give quite different estimates. When the errors are normally distributed, the estimates will be pretty similar since, under the normal distribution, most residuals fall in the $|x| \leq k$ region of the Huber $\rho$ function, which is equivalent to least squares. In the long-tailed situation you have, many fall into the $|x| > k$ region, which is a departure from OLS, and that would explain the discrepancy.
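The practical effect is easy to demonstrate. Below is a small self-contained Python sketch (not rlm()'s actual IRLS algorithm; the simulated data, the fixed MAD-based scale and the Nelder-Mead optimiser are all simplifying assumptions) fitting the same line by least squares and by minimising the Huber criterion with $k = 1.345$:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
x = np.linspace(0, 10, 50)
y = 1.0 + 2.0 * x + 0.2 * rng.standard_normal(x.size)
y[-1] += 100.0  # one gross outlier, mimicking a long-tailed error distribution

# Ordinary least squares (what lm() minimises); polyfit returns [slope, intercept].
slope_ols, intercept_ols = np.polyfit(x, y, 1)

# Huber rho with k = 1.345 (rlm()'s default), using a fixed MAD-based scale.
k = 1.345
resid = y - (intercept_ols + slope_ols * x)
s = 1.4826 * np.median(np.abs(resid - np.median(resid)))

def huber_objective(theta):
    b0, b1 = theta
    r = np.abs(y - (b0 + b1 * x)) / s
    return np.sum(np.where(r <= k, 0.5 * r**2, k * r - 0.5 * k**2))

slope_hub = minimize(huber_objective, x0=[intercept_ols, slope_ols],
                     method="Nelder-Mead").x[1]
print(slope_ols, slope_hub)  # the outlier drags OLS; the Huber slope stays near 2
```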
17,627
Run-time analysis of common machine learning algorithms
Here are some superficial tables: The Computational Mathematics of Statistical Data Mining. PPT Table 1 in: Chu, C. T., Kim, S. K., Lin, Y. A., Yu, Y., Bradski, G. R., Ng, A. Y., & Olukotun, K. (2006). Mapreduce for machine learning on multicore. Neural Information Processing Systems (pp. 281–288). PDF
17,628
Run-time analysis of common machine learning algorithms
I wrote the following article: computational complexity of machine learning algorithms where I keep track of the (theoretical) complexity of learning algorithms and run time analysis based on log-log regressions.
17,629
Updating linear regression efficiently when adding observations and/or predictors in R
If the algorithm you are looking for is indeed something like AS 274 (Applied Statistics, 1992, Vol. 41(2)), then you could just use biglm, as it does not require you to keep your data in a file.
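For intuition about what such updating buys you, here is a minimal Python sketch of folding in one observation at a time via the Sherman-Morrison identity. Note this is not biglm's algorithm (which uses a numerically more stable QR-based update in the AS 274 style); the function and variable names are illustrative:

```python
import numpy as np

def rls_update(P, beta, x, y):
    """Fold one new observation (x, y) into beta without refitting.

    P is the running inverse of X'X; Sherman-Morrison keeps it up to date
    in O(p^2) per observation instead of the O(p^3) of a full re-solve.
    """
    Px = P @ x
    gain = Px / (1.0 + x @ Px)
    beta = beta + gain * (y - x @ beta)
    P = P - np.outer(gain, Px)
    return P, beta

rng = np.random.default_rng(6)
X = np.column_stack([np.ones(200), rng.standard_normal((200, 2))])
yv = X @ np.array([1.0, 2.0, -3.0]) + 0.1 * rng.standard_normal(200)

# Initialise from the first few rows, then stream in the rest one at a time.
P = np.linalg.inv(X[:10].T @ X[:10])
beta = P @ X[:10].T @ yv[:10]
for xi, yi in zip(X[10:], yv[10:]):
    P, beta = rls_update(P, beta, xi, yi)

print(beta)  # matches the batch least-squares fit on all 200 rows
```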
17,630
Updating linear regression efficiently when adding observations and/or predictors in R
There is a rank-one QR update function in Matlab here that saves you a factor of $p$ in the complexity of updating the coefficients of a $p$-variate linear regression. Despite searching for days a couple of months ago, I've not been able to find an equivalent in R (beware: there are many qr.update functions on CRAN, but when you look under the hood they're just fake -- i.e., they call lm.update all the same). Update: try the source of the package 'leaps'. In the R source you will find a function 'leaps.forward', which calls a FORTRAN routine 'forwrd', located in the /src of the package, which seems to implement a rank-1 QR update.
17,631
Updating linear regression efficiently when adding observations and/or predictors in R
Why don't you try the update capability of the linear model object? update.lm(lm.obj, formula, data, weights, subset, na.action) Take a look at these links. For a general explanation of the update function: http://stat.ethz.ch/R-manual/R-devel/library/stats/html/update.html For a particular explanation about update.lm: http://www.science.oregonstate.edu/~shenr/Rhelp/update.lm.html
17,632
Updating linear regression efficiently when adding observations and/or predictors in R
I've also been looking for a long time for an equivalent to the Matlab qr update; leaps seems a nice way! In R, you could look at the recresid() function in package strucchange, which will give recursive residuals when you add an observation (not a variable!). My guess is that this would require little modification to obtain recursive betas (the betar in the code?).
17,633
Free public interest data hosting? [closed]
I did a quick search for google projects that may fit your needs, and I came up with two hits, which I have not tested: Google Fusion Tables and Google Public Data
Free public interest data hosting? [closed]
How about the UCI Machine Learning Repository? Here is their data donation policy.
Free public interest data hosting? [closed]
You should also take a look at Infochimps. I've never used the site personally, but it's designed for precisely this.
Free public interest data hosting? [closed]
In my opinion the best option is datahub.io — it is run by the Open Knowledge Foundation (OKFN) and runs on CKAN, the Comprehensive Knowledge Archive Network. There's also GitHub, which offers different options but is still quite suitable. I use both.
What is the role of feed forward layer in Transformer Neural Network architecture?
The feed-forward layer consists of weights trained along with the rest of the model, and the exact same matrices are applied at each token position. Since it is applied without any communication with, or dependence on, the other token positions, it is a highly parallelizable part of the model. Its role is to process the output of one attention layer so that it better fits the input of the next attention layer.
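A tiny pure-Python sketch of this position-wise application (the 2-dimensional weights below are made-up toy numbers, nothing like real transformer sizes): the same matrices are applied at every position, so identical input tokens always produce identical outputs, regardless of where they sit in the sequence.

```python
# Position-wise feed-forward: the same weights applied at every token position.
def ffn(x, W1, b1, W2, b2):
    # hidden = ReLU(W1 @ x + b1); out = W2 @ hidden + b2 (plain lists, no deps)
    hidden = [max(0.0, sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(W1, b1)]
    return [sum(w * hi for w, hi in zip(row, hidden)) + b
            for row, b in zip(W2, b2)]

W1, b1 = [[1.0, -1.0], [0.5, 0.5]], [0.0, 0.0]      # toy 2->2 weights
W2, b2 = [[1.0, 1.0], [-1.0, 2.0]], [0.1, -0.1]

tokens = [[1.0, 2.0], [3.0, 0.5], [1.0, 2.0]]       # positions 0 and 2 identical
outputs = [ffn(t, W1, b1, W2, b2) for t in tokens]  # independent per position
assert outputs[0] == outputs[2]  # same token -> same output, wherever it sits
```

Because each position is processed independently, the loop over `tokens` could run in parallel, which is exactly the parallelism the answer refers to.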
What is the role of feed forward layer in Transformer Neural Network architecture?
Consider the encoder part of the transformer. If there were no feed-forward layer, self-attention would simply perform a re-averaging of the value vectors. In order to give the model more expressive power, i.e. an element-wise non-linear transformation of the incoming vectors, we add a feed-forward layer to each encoder block.
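To see the "re-averaging" point concretely, here is a minimal sketch with toy numbers (stdlib only): softmax attention produces a convex combination of the value vectors, so without a non-linearity every output component stays inside the range spanned by the values.

```python
import math

def softmax(scores):
    # Numerically stable softmax: weights are positive and sum to 1.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

# One query position attending over three value vectors (toy scores).
scores = [2.0, 0.5, -1.0]
values = [[1.0, 4.0], [3.0, 0.0], [-2.0, 2.0]]
w = softmax(scores)                       # attention weights
out = [sum(wi * v[d] for wi, v in zip(w, values)) for d in range(2)]

# Convex combination: each output component lies within the values' range.
for d in range(2):
    lo, hi = min(v[d] for v in values), max(v[d] for v in values)
    assert lo <= out[d] <= hi
```

An element-wise non-linearity such as ReLU breaks this convex-combination constraint, which is the extra expressive power the feed-forward layer contributes.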
What is the role of feed forward layer in Transformer Neural Network architecture?
Here is my version: as @avata has said, self-attention blocks simply perform a re-averaging of value vectors. Imagine that in BERT you have 144 self-attention blocks (12 in each layer). Without FFNs they would all act much the same. Adding an FFN makes each of them behave like a separate small model that can be trained (i.e. gets its own parameters). The whole process then becomes like training a stacked ensemble where each model gets a different weight. This is not a perfect analogy, but the purpose of the FFN is to parameterize the self-attention modules. Each FFN has a hidden dimension of 3072 in BERT-base, which means a large share of BERT's learnable parameters sits in the FFN blocks. Consequently, there have been efforts to optimize these modules (either by replacing them or by reducing their size).
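The parameter bookkeeping behind that claim can be checked directly with the standard BERT-base sizes (hidden size 768, FFN hidden size 3072, 12 layers, bias terms included):

```python
# Parameters in the BERT-base FFN blocks: per layer, two linear maps
# 768 -> 3072 -> 768 plus their bias vectors.
d_model, d_ff, layers = 768, 3072, 12
per_layer = d_model * d_ff + d_ff + d_ff * d_model + d_model
total = per_layer * layers
print(per_layer, total)  # → 4722432 56669184
```

That is roughly 57M of BERT-base's roughly 110M parameters, i.e. about half the model lives in the FFN blocks, which is why shrinking or replacing them is an attractive optimization target.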
How to construct a cross-entropy loss for general regression targets?
Suppose that we are trying to infer the parametric distribution $p(y|\Theta(X))$, where $\Theta(X)$ is a vector-valued inverse link function with components $[\theta_1,\theta_2,...,\theta_M]$. We have a neural network at hand with some topology we have decided on. The number of outputs at the output layer matches the number of parameters we would like to infer (it may be fewer if we don't care about all the parameters, as we will see in the examples below). In the hidden layers we may use whatever activation function we like. What's crucial are the output activation functions for each parameter, as they have to be compatible with the support of the parameters. Some example correspondences: Linear activation: $\mu$, mean of the Gaussian distribution; Logistic activation: $\mu$, mean of the Bernoulli distribution; Softplus activation: $\sigma$, standard deviation of the Gaussian distribution, shape parameters of the Gamma distribution. Definition of cross entropy: $$H(p,q) = -E_p[\log q(y)] = -\int p(y) \log q(y) dy$$ where $p$ is the ideal truth and $q$ is our model. Empirical estimate: $$H(p,q) \approx -\frac{1}{N}\sum_{i=1}^N \log q(y_i)$$ where $N$ is the number of independent data points coming from $p$. Version for a conditional distribution: $$H(p,q) \approx -\frac{1}{N}\sum_{i=1}^N \log q(y_i|\Theta(X_i))$$ Now suppose that the network output is $\Theta(W,X_i)$ for a given input vector $X_i$ and all network weights $W$; then the training procedure for the expected cross entropy is: $$W_{opt} = \arg \min_W -\frac{1}{N}\sum_{i=1}^N \log q(y_i|\Theta(W,X_i))$$ which is equivalent to Maximum Likelihood Estimation of the network parameters.
Some examples: Regression: Gaussian distribution with heteroscedasticity $$\mu = \theta_1 : \text{linear activation}$$ $$\sigma = \theta_2: \text{softplus activation*}$$ $$\text{loss} = -\frac{1}{N}\sum_{i=1}^N \log [\frac{1} {\theta_2(W,X_i)\sqrt{2\pi}}e^{-\frac{(y_i-\theta_1(W,X_i))^2}{2\theta_2(W,X_i)^2}}]$$ Under homoscedasticity we don't need $\theta_2$, as it doesn't affect the optimization, and the expression simplifies to (after we throw away irrelevant constants): $$\text{loss} = \frac{1}{N}\sum_{i=1}^N (y_i-\theta_1(W,X_i))^2$$ Binary classification: Bernoulli distribution $$\mu = \theta_1 : \text{logistic activation}$$ $$\text{loss} = -\frac{1}{N}\sum_{i=1}^N \log [\theta_1(W,X_i)^{y_i}(1-\theta_1(W,X_i))^{(1-y_i)}]$$ $$= -\frac{1}{N}\sum_{i=1}^N y_i\log [\theta_1(W,X_i)] + (1-y_i)\log [1-\theta_1(W,X_i)]$$ with $y_i \in \{0,1\}$. Regression: Gamma response $$\alpha \text{(shape)} = \theta_1 : \text{softplus activation*}$$ $$\beta \text{(rate)} = \theta_2: \text{softplus activation*}$$ $$\text{loss} = -\frac{1}{N}\sum_{i=1}^N \log [\frac{\theta_2(W,X_i)^{\theta_1(W,X_i)}}{\Gamma(\theta_1(W,X_i))} y_i^{\theta_1(W,X_i)-1}e^{-\theta_2(W,X_i)y_i}]$$ Multiclass classification: Categorical distribution Some constraints cannot be handled directly by plain vanilla neural network toolboxes (but these days they seem to do very advanced tricks). This is one of those cases: $$\mu_1 = \theta_1 : \text{logistic activation}$$ $$\mu_2 = \theta_2 : \text{logistic activation}$$ ... $$\mu_K = \theta_K : \text{logistic activation}$$ We have the constraint $\sum \theta_i = 1$. So we fix it before we plug them into the distribution: $$\theta_i' = \frac{\theta_i}{\sum_{j=1}^K \theta_j}$$ $$\text{loss} = -\frac{1}{N}\sum_{i=1}^N \log [\prod_{j=1}^K\theta_j'(W,X_i)^{y_{i,j}}]$$ Note that $y$ is a vector quantity in this case. Another approach is the Softmax. *ReLU is unfortunately not a particularly good activation function for $(0,\infty)$, for two reasons.
First, it has a dead-derivative zone on the negative half-line, which can trap optimization algorithms. Second, at a value of exactly 0, many distributions become singular in that parameter. For this reason it is common practice to add a small value $\epsilon$ to assist off-the-shelf optimizers and for numerical stability. As suggested by @Sycorax, the Softplus activation is a much better replacement, as it has no dead-derivative zone. Summary: plug the network outputs into the parameters of the distribution, take the negative log, then minimize over the network weights. This is equivalent to Maximum Likelihood Estimation of the parameters.
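The recipe above can be checked numerically with a minimal pure-Python sketch (the raw outputs below stand in for hypothetical network outputs): apply the output activation, plug the result into the distribution's density, and take the negative log.

```python
import math

def softplus(z):
    # Maps any real z to (0, inf): suitable for a scale/shape parameter.
    return math.log1p(math.exp(z))

def sigmoid(z):
    # Maps any real z to (0, 1): suitable for a Bernoulli mean.
    return 1.0 / (1.0 + math.exp(-z))

def gaussian_nll(y, mu_raw, sigma_raw):
    # -log N(y | mu, sigma) with mu from a linear output, sigma via softplus.
    mu, sigma = mu_raw, softplus(sigma_raw)
    return math.log(sigma * math.sqrt(2 * math.pi)) + (y - mu) ** 2 / (2 * sigma ** 2)

def bernoulli_nll(y, z):
    # -log Bernoulli(y | theta) with theta via the logistic activation.
    theta = sigmoid(z)
    return -(y * math.log(theta) + (1 - y) * math.log(1 - theta))

# Sanity checks on values we can verify by hand:
assert abs(bernoulli_nll(1, 0.0) - math.log(2)) < 1e-12  # theta = 0.5 -> log 2
assert gaussian_nll(0.0, 0.0, 0.0) < gaussian_nll(1.0, 0.0, 0.0)  # min at y = mu
```

Averaging either loss over a dataset gives exactly the empirical cross-entropy above, which is why minimizing it is maximum likelihood estimation.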
How to construct a cross-entropy loss for general regression targets?
I'm going to answer for targets whose distribution family is an exponential family. This is typically justified as the minimally assumptive distribution. Let us denote the observed distributions by $X_1, X_2, \dots$ and the predictive distributions produced by the model by $Y_1, Y_2, \dots$. Every exponential family admits two important parametrizations: natural and expectation. Let the expectation parameters of the observed distributions be $\chi_i$, and the natural parameters of the predictive distributions be $\eta_i$. How does one move from an assumed probability distribution for the target variable to defining a cross-entropy loss for your network? The cross entropy of an exponential family is $$H^\times(X; Y) = -\chi^\intercal \eta + g(\eta) - E_{x\sim X}\left(h(x)\right). $$ where $h$ is the carrier measure and $g$ the log-normalizer of the exponential family. We typically just want the gradient of the cross entropy with respect to the predictions, which is just $$\frac{dH^\times(X; Y)}{d\eta} = g'(\eta)-\chi. $$ $g'(\eta)$ is just the vector of expectation parameters of the prediction. What does the function require as inputs? We require the pair $(\eta_i, \chi_i)$. Let's go through your examples: Categorical cross-entropy loss for one-hot targets. The one-hot vector (without the final element) is the vector of expectation parameters. The natural parameters are log-odds (see Nielsen and Nock for a good reference on the conversions). To optimize the cross entropy, you let the gradient be the difference of one-hot vectors. Gaussian-distributed target distribution (with known variance). The cross entropy is simply a paraboloid, and therefore corresponds to MSE. Its gradient is linear, and is simply the difference of the observed and predicted means. A less common example such as a gamma-distributed target, or a heavy-tailed target. Same thing: the optimization is done as a difference of expectation parameters.
For the gamma distribution, the expectation parameters are $(\frac{k}{\lambda}, \psi(k) - \log \lambda)$ where $k$ is the shape and $\lambda$ is the rate. The relationship between minimizing cross entropy and maximizing log-likelihood is a good question. Minimizing log-likelihood is the special case where the target is a sample $x$ (or delta distribution) rather than a distribution $X$. I think for the optimization you do the same thing as above except you just use $\chi=x$. The log-likelihood calculation is just the log-density of the predictive distribution evaluated at $x$.
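For the Bernoulli family this machinery is small enough to verify numerically (a sketch, not general-purpose code): the natural parameter $\eta$ is the log-odds, the log-normalizer is $g(\eta)=\log(1+e^\eta)$, and the cross-entropy gradient is $g'(\eta)-\chi=\sigma(\eta)-\chi$.

```python
import math

def g(eta):
    # Log-normalizer of the Bernoulli family in its natural parametrization.
    return math.log1p(math.exp(eta))

def g_prime(eta):
    # Expectation parameter of the prediction: sigmoid of the log-odds.
    return 1.0 / (1.0 + math.exp(-eta))

def grad_cross_entropy(eta, chi):
    # dH/d(eta) = g'(eta) - chi, as in the formula above.
    return g_prime(eta) - chi

# Check numerically that g' really is the derivative of g:
eps, eta = 1e-6, 0.3
numeric = (g(eta + eps) - g(eta - eps)) / (2 * eps)
assert abs(numeric - g_prime(eta)) < 1e-6

print(grad_cross_entropy(0.0, 1.0))  # → -0.5 (prediction 0.5, target 1)
```

Note this gradient is exactly the familiar "prediction minus target" rule of logistic regression, which is the Bernoulli special case of the exponential-family result.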
State-of-the-art ensemble learning algorithm in pattern recognition tasks?
State-of-the-art algorithms may differ from what is used in production in the industry. Also, the latter can invest in fine-tuning more basic (and often more interpretable) approaches to make them work better than what academics would. Example 1: According to TechCrunch, Nuance will start using "deep learning tech" in its Dragon speech recognition products this September. Example 2: Chiticariu, Laura, Yunyao Li, and Frederick R. Reiss. "Rule-Based Information Extraction is Dead! Long Live Rule-Based Information Extraction Systems!." In EMNLP, no. October, pp. 827-832. 2013. https://scholar.google.com/scholar?cluster=12856773132046965379&hl=en&as_sdt=0,22 ; http://www.aclweb.org/website/old_anthology/D/D13/D13-1079.pdf With that being said: Which of the ensemble learning algorithms is considered to be state-of-the-art nowadays? One of the state-of-the-art systems for image classification gets some nice gain with ensembling (just like most other systems, as far as I know): He, Kaiming, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. "Deep residual learning for image recognition." arXiv preprint arXiv:1512.03385 (2015). https://scholar.google.com/scholar?cluster=17704431389020559554&hl=en&as_sdt=0,22 ; https://arxiv.org/pdf/1512.03385v1.pdf
State-of-the-art ensemble learning algorithm in pattern recognition tasks?
I guess one could say that deep learning is pretty much state-of-the-art in most of the subdomains of computer vision (classification, detection, super-resolution, edge detection, ...) except for very specific tasks like SLAM, where deep learning is not yet on par with existing methods. Often, to get a few extra percent to win a competition, network averaging is used, but networks are getting so good that it does not matter that much anymore. In production it is totally different. Big companies usually rely on old algorithms that have proven to be effective and that the experts in place know and have years of practice using. Plus, integrating a new algorithm into the supply chain requires a lot of time. I think some camera companies still use the Viola-Jones detector for face detection, and I know for a fact that SIFT is heavily used in a lot of applications in industry. There is also still a bit of scepticism towards deep learning methods, which are considered dangerous black boxes, but the impressive results of those algorithms are slowly making people change their minds. Start-ups are more willing to use such solutions, as they need innovative solutions to get funded. I would say that in twenty years most computer-vision-based products will use deep learning, even if something more effective is discovered in between. To add to Franck's answer: deep learning is changing so fast that the ResNets of Kaiming He are not state of the art anymore. Densely Connected Convolutional Networks and wide-and-deep networks with SGD restarts are now SOTA on CIFAR and SVHN, and probably ImageNet too, and even this could change in a few days with the ILSVRC 2016 results on the 16th of September. If you are interested in more state-of-the-art results, those on MS-COCO, the most challenging detection dataset in existence, will be released at ECCV in October.
State-of-the-art ensemble learning algorithm in pattern recognition tasks?
There are a lot of what-ifs involved in your question, and usually finding the best model involves testing most of these on the data. Just because a model could in theory produce more accurate results does not mean it will always produce the model with the lowest error. That being said... Neural-net ensembles can be very accurate, as long as you can accept the black box. Varying both the number of nodes and the number of layers can cover a lot of variance in the data, but by introducing this many modelling factors it becomes easy to overfit. Random Forests have rarely produced the most accurate results for me, but boosted trees can model complex relationships like those in the AI tasks you discussed without much risk of overfitting. One might think: why not just ensemble all of these models together? But that compromises on the possible strengths of the individual models, and again would likely lead to some overfitting. Computational efficiency is a different matter, and I would not start with a very complicated neural net. Using a neural net as a benchmark, in my experience boosted trees have been the most efficient. This is based on my experience and a reasonable understanding of the theory underlying each of the model types discussed.
Why not always use ensemble learning?
In general it is not true that ensembles will always perform better. There are several ensemble methods, each with its own advantages and weaknesses, and which one to use depends on the problem at hand. For example, if you have models with high variance (they over-fit your data), then you are likely to benefit from bagging. If you have biased models, it is better to combine them with boosting. There are also different strategies for forming ensembles; the topic is just too wide to cover in one answer. But my point is: if you use the wrong ensemble method for your setting, you are not going to do better. For example, using bagging with a biased model is not going to help. Also, if you need to work in a probabilistic setting, ensemble methods may not work either. It is known that boosting (in its most popular forms, like AdaBoost) delivers poor probability estimates. That is, if you would like a model that allows you to reason about your data, not only classify it, you might be better off with a graphical model.
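The bagging-for-high-variance point can be illustrated numerically. Below is a sketch (plain NumPy; the 1-nearest-neighbour regressor is chosen only as a classic high-variance base learner, and all names and parameter values are illustrative) showing that averaging the learner over bootstrap resamples reduces test error:

```python
import numpy as np

rng = np.random.default_rng(0)

def one_nn(x_tr, y_tr, x_te):
    # 1-nearest-neighbour regression: a deliberately high-variance learner
    idx = np.abs(x_te[:, None] - x_tr[None, :]).argmin(axis=1)
    return y_tr[idx]

x = rng.uniform(-3, 3, 300)
y = np.sin(x) + rng.normal(0, 0.5, x.size)   # noisy observations of sin(x)
x_test = np.linspace(-3, 3, 200)
truth = np.sin(x_test)

single = one_nn(x, y, x_test)                # a single over-fit model

B = 50                                       # bagging: average over bootstraps
preds = np.zeros((B, x_test.size))
for b in range(B):
    i = rng.integers(0, x.size, x.size)      # bootstrap resample indices
    preds[b] = one_nn(x[i], y[i], x_test)
bagged = preds.mean(axis=0)

mse_single = float(np.mean((single - truth) ** 2))
mse_bagged = float(np.mean((bagged - truth) ** 2))
```

On a biased base learner, by contrast, this averaging would leave the bias untouched, which is the point made above.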
Why not always use ensemble learning?
You should! If you mean an ensemble of models (also called blending, stacking, or staging). Unfortunately, the meaning of "ensemble" is quite vague: models that are themselves based on ensembles (random forests, ...) may not perform better than others (linear regression, neural networks). In his article Stacked Generalization (1992), David H. Wolpert stated: "The conclusion is that for almost any real-world generalization problem one should use some version of stacked generalization to minimize the generalization error rate." And this remains true today. However, these models are harder to tune and ship than a single model. They also have a longer prediction time, since you need the predictions of possibly hundreds of models. And even though they give better predictions, the increment in accuracy may not be worth the hassle. After seeing various questions about ensemble learning, I wrote "why does model staging work", exploring the reasons for their success.
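A toy illustration of why blending helps: if two models make unbiased, (partly) independent errors, their average has lower error variance than either alone. The setup and numbers below are made up purely for the demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(size=10_000)                    # target values
pred_a = y + rng.normal(0, 1.0, size=y.size)   # model A: unbiased but noisy
pred_b = y + rng.normal(0, 1.0, size=y.size)   # model B: independent errors
blend = (pred_a + pred_b) / 2                  # simplest possible ensemble

def mse(p):
    return float(np.mean((p - y) ** 2))
# averaging two independent, equal-variance errors halves the error variance
```

In practice the models' errors are correlated, so the gain is smaller, which is part of why the extra accuracy "may not be worth the hassle".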
Is it wrong to use line plots for discrete data?
Connected line plots have proven too useful to limit to a single interpretation. A few prominent uses: Interpolated values: the case you mention, where both variables are continuous and every interpolated point along the line has a meaningful interpretation. Rate of change: even when the in-between values aren't meaningful, the slope of each line segment is a good representation of the rate of change; note that for this interpretation the X and Y values must be spaced appropriately, which is not the case in the wage plot you cite. Profile comparison: when comparing small multiples or overlaid measures, lines can be useful even for categorical factors; in this case the lines serve to connect groups of responses for limited pattern recognition. Here's an example from peltiertech.com with the factor on the Y (instead of the X) axis for label readability:
Is it wrong to use line plots for discrete data?
Well, the donuts might be related to the weight :-) While I see your point, I think this example is not so bad, because time (on the horizontal axis, which is what the lines refer to) is continuous. The meaning of the line, to me, isn't so much that at each time of day you ate a certain number of donuts, but that the number of donuts per day changes in some regular way. Thus we might add something like a loess smoother to the line, and it would make sense. It is at least reasonable to think of donuts eaten each hour, or even each minute (although this would be more sensible with a variable whose count per day was higher). What is more worrisome is when the horizontal axis is discrete (and especially when it is nominal) but lines are drawn anyway. This really makes no sense. E.g., if you are looking at (say) the % voting for Obama among residents of different regions of the USA, it makes no sense to draw a line between Northeast and Midwest, especially since the order of the regions is arbitrary but changing the order would change the lines. Yet I have seen graphs like this.
How to interpret autocorrelation
Those plots are showing you the $\textit{correlation of the series with itself, lagged by x time units}$. So imagine taking your time series of length $T$, copying it, and deleting the first observation of copy #1 and the last observation of copy #2. Now you have two series of length $T-1$ for which you calculate a correlation coefficient. This is the value of the vertical axis at $x=1$ in your plots: it represents the correlation of the series lagged by one time unit. You go on and do this for all possible lags $x$, and this defines the plot. The answer to your question of what is needed to report a pattern depends on what pattern you would like to report. But quantitatively speaking, you have exactly what I just described: the correlation coefficient at different lags of the series. You can extract these numerical values by issuing the command acf(x.ts,100)$acf. In terms of what lag to use, this is again a matter of context. It is often the case that there will be specific lags of interest. Say, for example, you believe the fish species migrates to and from an area every ~30 days. This may lead you to hypothesize a correlation in the time series at lags of 30. In this case, you would have support for your hypothesis.
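The copy-shift-correlate procedure described above can be written out directly. This sketch uses NumPy rather than R's acf (which normalizes slightly differently, by the full-series variance, so values differ marginally), and the 30-unit cycle is an invented stand-in for the migration example:

```python
import numpy as np

def acf_at_lag(x, lag):
    # copy the series, drop the first `lag` points of one copy and the last
    # `lag` points of the other, then correlate the two shifted copies
    return float(np.corrcoef(x[lag:], x[:-lag])[0, 1])

rng = np.random.default_rng(0)
t = np.arange(360)
x = np.sin(2 * np.pi * t / 30) + rng.normal(0, 0.3, t.size)  # ~30-unit cycle

r30 = acf_at_lag(x, 30)   # strongly positive: the cycle repeats every 30 units
r15 = acf_at_lag(x, 15)   # strongly negative: half a cycle out of phase
```

A large positive value at lag 30 is exactly the kind of evidence the migration hypothesis would predict.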
Interpretation of the area under the PR curve
One axis of ROC and PR curves is the same: the TPR, i.e. how many positive cases have been classified correctly out of all positive cases in the data. The other axis is different. ROC uses the FPR, which is how many mistakenly declared positives there are out of all negatives in the data. The PR curve uses precision: how many true positives there are out of all cases that have been predicted as positive. So the base of the second axis is different: ROC uses what's in the data, while PR uses what's in the prediction as its basis. The PR curve is thought to be more informative when there is high class imbalance in the data; see this paper: http://pages.cs.wisc.edu/~jdavis/davisgoadrichcamera2.pdf
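The different denominators are easy to see on a small worked example (all counts below are hypothetical, chosen to mimic heavy class imbalance):

```python
# confusion counts for a hypothetical, highly imbalanced problem
tp, fn = 80, 20        # 100 positives in the data
fp, tn = 100, 9900     # 10,000 negatives in the data

tpr = tp / (tp + fn)          # denominator: positives in the data
fpr = fp / (fp + tn)          # denominator: negatives in the data
precision = tp / (tp + fp)    # denominator: cases *predicted* positive
# tpr = 0.8 and fpr = 0.01 look great; precision ~ 0.44 exposes the imbalance
```

The ROC point (FPR 0.01, TPR 0.8) looks excellent, while the PR point (precision 0.44, recall 0.8) reveals that almost every second flagged case is a false alarm.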
Goodness-of-fit for very large sample sizes
The test is returning the correct result: the distributions are not the same from day to day. This is, of course, of no use to you. The issue you are facing has long been known; see: Karl Pearson and R. A. Fisher on Statistical Tests: A 1935 Exchange from Nature. Instead, you could look back at previous data (either yours or from somewhere else) and get the distribution of day-to-day changes for each category. Then you check whether the current change is likely to have occurred given that distribution. It is difficult to answer more specifically without knowing the data and the types of errors, but this approach seems better suited to your problem.
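One way to sketch the suggested check, for a single category: build the empirical distribution of day-to-day changes from history and flag today's change only if it falls outside that distribution's usual range. All numbers here are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
# hypothetical history: a category's daily share over 200 days, hovering near 20%
history = 0.20 + rng.normal(0, 0.002, size=200)
changes = np.diff(history)                      # distribution of day-to-day changes

today_change = 0.012                            # today's observed shift (made up)
lo, hi = np.quantile(changes, [0.005, 0.995])   # empirical 99% band
unusual = not (lo <= today_change <= hi)        # flag only practically rare shifts
```

Unlike the chi-squared test on millions of observations, this flags a change only when it is large relative to the day-to-day variation you actually observe.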
Goodness-of-fit for very large sample sizes
Let's go ahead and kill the sacred cow of 5%. You have (correctly) pointed out that the issue is the exuberant power of the test. You may want to recalibrate it towards a more relevant power, say a more traditional value of 80%: Decide on the effect size you want to detect (e.g., a 0.2% shift). Decide on the power that is good enough for you so that the test is not overpowered (e.g., $1-\beta=80\%$). Work back from the existing theory of the Pearson test to determine the level that would make your test practical. Suppose you have 5 categories with equal probabilities, $p_1=p_2=p_3=p_4=p_5=0.2$, and your alternative is $p+\delta/\sqrt{n}=(0.198,0.202,0.2,0.2,0.2)$. So for $n=10^6$, $\delta=(-2,+2,0,0,0)$. The asymptotic distribution is non-central chi-square with $k=$ (# categories $-$ 1) $=4$ d.f. and non-centrality parameter $$ \lambda=\sum_j \delta_j^2/p_j = 4/0.2 + 4/0.2 = 40. $$ With this large value of $\lambda$, this is close enough to $N(\mu=\lambda+k=44,\sigma^2=2(k+2\lambda)=168)$. The 80th percentile is $44+\sqrt{168}\cdot\Phi^{-1}(0.8)\approx44+12.96\cdot0.84\approx54.9$. Hence your desirable level of the test is the upper tail probability of $\chi^2_4$ beyond 54.9: $${\rm Prob}[\chi_4^2>54.9]\approx3.3\cdot10^{-11}.$$ So that would be the level at which you should consider testing your data so that the test has 80% power to detect the 0.2% differences. (Please check my math; this is a ridiculous level for a test, but that's what you wanted with your Big Data, didn't you? On the other hand, if you routinely see Pearson $\chi^2$ values in the range of a couple hundred, this may be an entirely meaningful critical value to entertain.) Keep in mind, though, that the approximations, both for the null and the alternative, may work poorly in the tails; see this discussion.
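The back-of-the-envelope normal approximation above can be checked against the exact non-central chi-square distribution, e.g. with SciPy:

```python
from scipy import stats

k, lam = 4, 40.0                      # df = #categories - 1, non-centrality lambda
# 80th percentile of the alternative: non-central chi-square(k, lambda)
crit = stats.ncx2.ppf(0.80, k, lam)   # close to the normal approximation's ~54.9
# test level that places this critical value in the null chi-square's tail
alpha = stats.chi2.sf(crit, k)        # on the order of 1e-11
```

The exact percentile lands within about a unit of the normal approximation, confirming that a level around $10^{-11}$ gives the desired 80% power here.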
Goodness-of-fit for very large sample sizes
In these cases, my professor suggested computing Cramér's V, which is a measure of association based on the chi-squared statistic. This should give you the strength of the association and help you decide whether the test is hypersensitive. I am not sure, however, whether you can use V with the kind of statistic that $G^2$ tests return. The formula for V is: $$\phi_c=\sqrt{\frac{\chi^2}{n(k-1)}}$$ where $n$ is the grand total of observations and $k$ is the number of rows or the number of columns, whichever is less. For goodness-of-fit tests, $k$ is apparently the number of rows.
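The formula is a one-liner; the example values below are invented to show the point about hypersensitive tests:

```python
import math

def cramers_v(chi2, n, k):
    # k = min(number of rows, number of columns) of the table
    return math.sqrt(chi2 / (n * (k - 1)))

# a "significant" chi-square from a huge sample can still be a tiny effect:
v = cramers_v(chi2=500.0, n=1_000_000, k=3)   # ~0.016: negligible association
```

A chi-square of 500 would be wildly significant at $n=10^6$, yet $V\approx0.016$ says the association is practically nil, which is exactly the diagnostic being suggested.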
Goodness-of-fit for very large sample sizes
One approach would be to make the goodness-of-fit tests more meaningful by performing them on smaller blocks of data. You could split your data from a given day into, e.g., 1000 blocks of 1000 samples each, and run an individual goodness-of-fit test for each block, with the expected distribution given by the full dataset from the previous day. Keep the significance level for each individual test at the level you were using (e.g., $\alpha = 0.05$). Then look for significant departures of the total number of positive tests from the expected number of false positives (under the null hypothesis that there is no difference in the distributions, the total number of positive tests is binomially distributed with parameter $\alpha$). You could find a good block size by taking datasets from two days where you can assume the distribution was the same and seeing what block size gives a frequency of positive tests roughly equal to $\alpha$ (i.e., what block size stops your test from reporting spurious differences).
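The whole scheme can be sketched end to end on simulated data (five equiprobable categories and all sizes are assumptions for the demonstration; the data are drawn under the null, so roughly an $\alpha$ fraction of blocks should flag):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
p_yesterday = np.full(5, 0.2)                        # yesterday's category frequencies
today = rng.choice(5, size=1_000_000, p=p_yesterday)  # today's draws, same distribution

alpha, block = 0.05, 1_000
blocks = today.reshape(-1, block)                    # 1000 blocks of 1000 samples
positives = 0
for b in blocks:
    observed = np.bincount(b, minlength=5)
    _, p = stats.chisquare(observed, p_yesterday * block)
    positives += p < alpha

# under H0, positives ~ Binomial(n_blocks, alpha); test for an excess
n_blocks = blocks.shape[0]
p_excess = float(stats.binom.sf(positives - 1, n_blocks, alpha))
```

A genuine shift between days would inflate `positives` well beyond $\alpha \cdot 1000 = 50$, and `p_excess` would shrink accordingly.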
When is it necessary to include the lag of the dependent variable in a regression model and which lag?
A dynamic panel model might make sense if you have an eye-for-an-eye retaliation model for homicides. For example, if the homicide rate were largely driven by gang feuds, the murders at time $t$ might well be a function of the deaths at $t-1$, or other lags. I am going to answer your questions out of order. Suppose the DGP is \begin{equation} y_{it}=\delta y_{it-1}+x_{it}^{\prime}\beta+\mu_{i}+v_{it}, \end{equation} where the errors $\mu$ and $v$ are independent of each other and among themselves. You're interested in testing whether $\delta = 0$ (question 2). If you use OLS, it's easy to see that $y_{it-1}$ and the first part of the error are correlated, which renders OLS biased and inconsistent, even when there's no serial correlation in $v$. We need something more complicated to do the test. The next thing you might try is the fixed effects estimator with the within transformation, where you transform the data by subtracting each unit's average, $\bar y_{i}$, from each observation. This wipes out $\mu$, but the estimator suffers from Nickell bias, a bias that does not go away as the number of units $N$ grows, so it is inconsistent for large-$N$, small-$T$ panels. However, as $T$ grows, you get consistency of $\delta$ and $\beta$. Judson and Owen (1999) ran simulations with $N=20,100$ and $T=5,10,20,30$ and found the bias to be increasing in $\delta$ and decreasing in $T$. However, even for $T=30$, the bias could be as much as $20\%$ of the true coefficient value. That's bad news bears! So depending on the dimensions of your panel, you may want to avoid the within FE estimator. If $\delta > 0$, the bias is negative, so the persistence of $y$ is underestimated. If the regressors are correlated with the lag, $\beta$ will also be biased. Another simple FE approach is to first-difference the data to remove the fixed effect, and use $y_{it-2}$ to instrument for $\Delta y_{it-1} = y_{it-1}-y_{it-2}$. You also use $x_{it}-x_{it-1}$ as an instrument for itself. Anderson and Hsiao (1981) is the canonical reference. This estimator is consistent (as long as the explanatory $x$s are predetermined and the original error terms are not serially correlated), but not fully efficient, since it does not use all the available moment conditions and does not use the fact that the error term is now differenced. This would probably be my first choice. If you think the $v$ follow an AR(1) process, you can use the third and fourth lags of $y$ instead. Arellano and Bond (1991) derive a more efficient generalized method of moments (GMM) estimator, which has since been extended, relaxing some of the assumptions. Chapter 8 of Baltagi's panel book is a good survey of this literature, though it does not deal with lag selection as far as I can tell. This is state-of-the-art 'metrics, but more technically demanding. I think the plm package in R has some of these built in. Dynamic panel models have been in Stata since version 10, and SAS has the GMM version at least. None of these are count data models, but that may not be a big deal depending on your data. However, here's one example of a GMM dynamic Poisson panel model in Stata. The answer to your first question is more speculative. If you leave out the lagged $y$ and first-difference, I believe that $\beta$ can still be estimated consistently, though less precisely, since the variance is now larger. If that is the parameter you care about, this may be acceptable. What you lose is that you cannot say whether there were a lot of homicides in area X because there were a lot last month, or because area X has a propensity for violence. You give up the ability to distinguish between state dependence and unobserved heterogeneity (question 1).
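A minimal simulation of the Anderson-Hsiao idea (no $x$ covariates, and all parameter values made up for the sketch) shows why the instrument matters: first-difference OLS is badly biased because $\Delta y_{it-1}$ is correlated with $\Delta v_{it}$, while instrumenting with $y_{it-2}$ recovers $\delta$:

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, delta = 2_000, 10, 0.5
mu = rng.normal(size=N)                        # unit fixed effects
y = np.zeros((N, T))
y[:, 0] = mu + rng.normal(size=N)
for t in range(1, T):
    y[:, t] = delta * y[:, t - 1] + mu + rng.normal(size=N)

# first-difference to wipe out mu; instrument dy_{t-1} with y_{t-2}
dy  = (y[:, 2:] - y[:, 1:-1]).ravel()          # dy_t,     t = 2..T-1
dyl = (y[:, 1:-1] - y[:, :-2]).ravel()         # dy_{t-1}
z   = y[:, :-2].ravel()                        # y_{t-2}: uncorrelated with dv_t

delta_ols = float((dyl @ dy) / (dyl @ dyl))    # biased toward/below zero
delta_iv  = float((z @ dy) / (z @ dyl))        # Anderson-Hsiao simple IV
```

With iid $v$, the moment condition $E[y_{it-2}\,\Delta v_{it}]=0$ holds exactly, so the IV estimate sits near the true $\delta=0.5$ while the differenced OLS estimate is pulled far below it.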
When is it necessary to include the lag of the dependent variable in a regression model and which la
A dynamic panel model might make sense if you have a eye-for-an-eye retaliation model for homicides. For example, if the homicide rate was largely driven by gangs feuds, the murders at time $t$ might
When is it necessary to include the lag of the dependent variable in a regression model and which lag? A dynamic panel model might make sense if you have a eye-for-an-eye retaliation model for homicides. For example, if the homicide rate was largely driven by gangs feuds, the murders at time $t$ might well be a function of the deaths at $t-1$, or other lags. I am going to answer your questions out of order. Suppose the DGP is \begin{equation} y_{it}=\delta y_{it-1}+x_{it}^{\prime}\beta+\mu_{i}+v_{it}, \end{equation} where the errors $\mu$ and $v$ are independent of each other and among themselves. You're interested in conducting the test of whether $\delta = 0$ (question 2). If you use OLS, it's easy to see that $y_{it-1}$ and the first part of the error are correlated, which renders OLS biased and inconsistent, even when there's no serial correlation in $v$. We need something more complicated to do the test. The next thing you might try is the fixed effects estimator with the within transformation, where you transform the data by subtracting each unit's average $y$, $\bar y_{i}$, from each observation. This wipes out $\mu$, but this estimator suffers from Nickell bias, which bias does not go away as the number of observations $N$ grows, so it is inconsistent for large $N$ and small $T$ panels. However, as $T$ grows, you get consistency of $\delta$ and $\beta$. Judson and Owen (1999) do some simulations with $N=20,100$ and $T=5,10,20,30$ and found the bias to be increasing in $\delta$ and decreasing in $T$. However, even for $T=30$, the bias could be as much as $20\%$ of the true coefficient value. That's bad news bears! So depending on the dimensions of you panel, you may want to avoid the within FE estimator. If $\delta > 0$, the bias is negative, so the persistence of $y$ is underestimated. If the regressors are correlated with the lag, the $\beta$ will also be biased. 
Another simple FE approach is to first-difference the data to remove the fixed effect, and use $y_{it-2}$ to instrument for $\Delta y_{it-1} = y_{it-1}-y_{it-2}$. You also use $x_{it}-x_{it-1}$ as an instrument for itself. Anderson and Hsiao (1981) is the canonical reference. This estimator is consistent (as long as the explanatory $X$s are pre-determined and the original error terms are not serially correlated), but not fully efficient, since it does not use all the available moment conditions and does not use the fact that the error term is now differenced. This would probably be my first choice. If you think that $v$ follows an AR(1) process, you can use third and fourth lags of $y$ instead. Arellano and Bond (1991) derive a more efficient generalized method of moments (GMM) estimator, which has been extended since, relaxing some of the assumptions. Chapter 8 of Baltagi's panel book is a good survey of this literature, though it does not deal with lag selection as far as I can tell. This is state-of-the-art 'metrics, but more technically demanding. I think the plm package in R has some of these built in. Dynamic panel models have been in Stata since version 10, and SAS has the GMM version at least. None of these are count data models, but that may not be a big deal depending on your data. However, here's one example of a GMM dynamic Poisson panel model in Stata. The answer to your first question is more speculative. If you leave out the lagged $y$ and first-difference, I believe that $\beta$ can still be estimated consistently, though less precisely since the variance is now larger. If that is the parameter you care about, that may be acceptable. What you lose is that you cannot say whether there were a lot of homicides in area X because there were a lot last month or because area X has a propensity for violence. You give up the ability to distinguish between state dependence and unobserved heterogeneity (question 1).
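To see the Nickell bias concretely, here is a minimal simulation sketch (plain Python rather than any panel package; all numbers are illustrative). It generates a dynamic panel with unit fixed effects and applies the within estimator, which lands well below the true $\delta$ when $T$ is small:

```python
import random

random.seed(42)

# Illustrative simulation: generate y_it = delta*y_{it-1} + mu_i + v_it
# and apply the within (fixed effects) estimator to expose Nickell bias.
N, T, delta = 200, 5, 0.5

num = den = 0.0
for i in range(N):
    mu = random.gauss(0.0, 1.0)
    y = [mu / (1.0 - delta)]          # start each unit at its long-run mean
    for t in range(T):
        y.append(delta * y[-1] + mu + random.gauss(0.0, 1.0))
    y_lag, y_cur = y[:-1], y[1:]      # (y_{t-1}, y_t) pairs, t = 1..T
    m_lag = sum(y_lag) / T            # within transformation: demean per unit
    m_cur = sum(y_cur) / T
    for yl, yc in zip(y_lag, y_cur):
        num += (yl - m_lag) * (yc - m_cur)
        den += (yl - m_lag) ** 2

delta_hat = num / den
print(f"true delta = {delta}, within estimate = {delta_hat:.3f}")
```

The estimate is substantially below 0.5, consistent with the Judson and Owen finding; re-running with larger $T$ moves the estimate back toward the truth.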
In an election, how can we tell the certainty that a candidate will be the winner?
The main difficulty in practice is not the statistical uncertainty that a fluke streak of luck would have given one candidate more votes. The main difficulty, by an order of magnitude or more, is that the ballots which have been opened are almost never an unbiased sample of the votes cast. If you ignore this effect, you get the famous error "Dewey Defeats Truman," which occurred with a large biased sample. In practice, voters who favor one candidate versus another are not equally distributed by region, by whether they work during the day, or by whether they are deployed overseas and hence vote by absentee ballot. These are not small differences. I think what news organizations do now is to break the population into groups and use the results to estimate how each group voted (including turnout). These estimates may be based on models and prior assumptions from previous elections, not just the data from this election. They may not take into account oddities such as the butterfly ballots of Palm Beach.
In an election, how can we tell the certainty that a candidate will be the winner?
In survey sampling the standard error of the estimate of a proportion is needed. It depends more on $i$ than on $j$. It also requires that the $i$ opened ballots were selected at random. If $p$ is the true final proportion for candidate A, then the variance of the estimate is $$\frac{(1-\frac{i}{j})p(1-p)}{i}$$ The quantity $(1-\frac{i}{j})$ is called the finite population correction factor. To estimate this variance, the usual estimate for $p$ is substituted for $p$ in the formula. The standard error is obtained by taking the square root. In predicting a winner the pollster might use the estimate plus or minus 3 standard errors. If 0.5 is not contained in the interval, then candidate A is declared the winner if 0.5 is below the lower limit, or his opponent is declared the winner if 0.5 is above the upper limit. Of course this only says with very high confidence who the winner will be in the event that 0.5 is outside the interval. The confidence level is 0.99 if three standard errors is what you use (based on the normal approximation to the binomial). If 0.5 is inside the interval, no one is declared the winner and the pollster waits for more data to accumulate. In making a projection the pollsters can select a stratified random sample from the accumulated votes to avoid the potential bias that may occur if one looks at all the counted ballots. The problem with looking at all accumulated votes is that certain precincts complete counting before others and they may not be representative of the population. The article here provides good coverage of the problem and numerous references. It has been pointed out that accumulated votes can provide biased estimates of proportions because either the precincts that have yet to report tend to favor the party of the candidate that is trailing, or the absentee ballots are likely to favor the candidate that is trailing and those votes get counted last. Sophisticated pollsters like Harris and Gallup do not fall into such traps.
The simple analysis of constructing confidence intervals based on accumulated votes that I have outlined is only one factor that is used. These pollsters have a great deal more information at their disposal. They have polls that were taken shortly before the election, and they have the voting patterns of all the precincts and the absentee votes from elections in recent years. So if there are clear biases that could swing a close election in the opposite direction, the pollsters will recognize this and hold off projecting a winner. In the US, absentee ballots come predominantly from the military overseas and college students who are at school away from home. While the military may tend to be more conservative and likely to vote Republican, the college students tend to be more liberal and likely to vote Democratic. All these considerations are taken into account. The care and sophistication of modern polling is the reason that gross errors such as the Literary Digest poll of 1936 or the Chicago newspaper's premature concession of the 1948 election to Dewey have not occurred since then.
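The interval computation described above can be sketched in a few lines; the counts $i$, $j$ and the observed share are made-up numbers for illustration:

```python
import math

# Hypothetical numbers: i ballots counted out of j cast,
# p_hat is candidate A's share among the counted ballots.
i, j = 40_000, 100_000
p_hat = 0.52

fpc = 1.0 - i / j                               # finite population correction
se = math.sqrt(fpc * p_hat * (1.0 - p_hat) / i)
lo, hi = p_hat - 3.0 * se, p_hat + 3.0 * se     # estimate +/- 3 standard errors

if lo > 0.5:
    call = "candidate A projected winner"
elif hi < 0.5:
    call = "opponent projected winner"
else:
    call = "no projection yet"
print(f"3-SE interval [{lo:.4f}, {hi:.4f}]: {call}")
```

With these numbers the whole interval sits above 0.5, so candidate A would be projected; remember this is valid only under the random-sampling assumption the answer stresses.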
Difference between the terms 'joint distribution' and 'multivariate distribution'?
The terms are basically synonyms, but the usages are slightly different. Think about the univariate case: you may talk about "distributions" in general, you might more specifically refer to "univariate distributions", and you refer to "the distribution of $X$". You don't normally say "the univariate distribution of $X$". Similarly, in the multivariate case you may talk about "distributions" in general, you might more specifically refer to "multivariate distributions", and you refer to "the distribution of $(X,Y)$" or "the joint distribution of $X$ and $Y$". Thus the joint distribution of $X$ and $Y$ is a multivariate distribution, but you don't normally say "the multivariate distribution of $(X,Y)$" or "the multivariate distribution of $X$ and $Y$".
Difference between the terms 'joint distribution' and 'multivariate distribution'?
I'd be inclined to say that "multivariate" describes the random variable, i.e., it is a vector, and that the components of a multivariate random variable have a joint distribution. "Multivariate random variable" sounds a bit strange, though; I'd call it a random vector.
Difference between the terms 'joint distribution' and 'multivariate distribution'?
The canonical textbooks describing properties of the various probability distributions by Johnson & Kotz and later co-authors are entitled Univariate Discrete Distributions, Continuous Univariate Distributions, Continuous Multivariate Distributions and Discrete Multivariate Distributions. So I think you're on safe ground describing a distribution as 'multivariate' rather than 'joint'. Conflict of interest statement: The author is a member of Wikipedia:WikiProject Statistics.
Difference between the terms 'joint distribution' and 'multivariate distribution'?
I think they are mostly synonyms, and that if there is any difference, it lies in details that are likely irrelevant to your audience.
Difference between the terms 'joint distribution' and 'multivariate distribution'?
I would be careful to say a joint distribution is synonymous with a multivariate distribution. For example, a joint normal distribution can be a multivariate normal distribution or a product of univariate normal distributions. A univariate normal distribution has a scalar mean and a scalar variance, so for the univariate (one-dimensional) random variable $x$ distributed according to a normal we have $p(x) = \mathcal{N}(x ; \mu, \sigma)$. A multivariate normal distribution has a mean vector of length $n>1$ and a covariance matrix of size $n \times n$. Two univariate random variables $x,y$ can be jointly distributed according to a multivariate normal distribution $p(x,y) = \mathcal{N}([x \ y]^\intercal ; [\mu_x \ \mu_y]^\intercal , \Sigma_{xy})$. However, if the covariance matrix of the multivariate distribution is diagonal, then $x$ and $y$ have zero correlation (and, being jointly normal, are independent), and so the joint distribution factors into a product of univariate Gaussians, $p(x,y) = \mathcal{N}(x ; \mu_x, \sigma_x)\,\mathcal{N}(y ; \mu_y, \sigma_y)$. Therefore the joint distribution is not really synonymous with a multivariate distribution in the case of independent variables. https://en.wikipedia.org/wiki/Joint_probability_distribution#Joint_distribution_for_independent_variables
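A quick numerical check of that factorization, using hand-rolled densities rather than any particular statistics library (the means, standard deviations, and evaluation point are arbitrary):

```python
import math

def norm_pdf(x, mu, sigma):
    """Univariate normal density with standard deviation sigma."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

def bivariate_diag_pdf(x, y, mu_x, mu_y, sigma_x, sigma_y):
    """Bivariate normal density with covariance diag(sigma_x^2, sigma_y^2)."""
    zx = (x - mu_x) / sigma_x
    zy = (y - mu_y) / sigma_y
    return math.exp(-0.5 * (zx * zx + zy * zy)) / (2.0 * math.pi * sigma_x * sigma_y)

x, y = 0.3, -1.2
joint = bivariate_diag_pdf(x, y, 0.0, 1.0, 1.0, 2.0)
product = norm_pdf(x, 0.0, 1.0) * norm_pdf(y, 1.0, 2.0)
print(joint, product)   # the two values agree
```

The joint density with a diagonal covariance matrix equals the product of the two marginal densities at every point, which is exactly the independence statement in the answer.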
Who uses R with multicore, SNOW or CUDA package for resource intense computing?
I am a biologist who models the effects of inter-annual climatic variation on population dynamics of several migratory species. My datasets are very large (spatially intensive data) so I run my R code using multicore on Amazon EC2 servers. If my task is particularly resource intensive, I will choose a High Memory Quadruple Extra Large instance, which comes with 26 CPU units, 8 cores, and 68 GB of RAM. In this case I usually run 4-6 scripts simultaneously, each of which is working through a fairly large data set. For smaller tasks, I choose servers with 4-6 cores and about 20 GB of RAM. I launch these instances (usually spot instances because they are cheaper but can terminate anytime the current rate exceeds what I have chosen to pay), run the script for several hours, and then terminate the instance once my script has finished. As for the machine image (Amazon Machine Image), I took someone else's Ubuntu install, updated R, installed my packages, and saved that as my private AMI in my S3 storage space. My personal machine is a dual-core MacBook Pro and it has a hard time forking multicore calls. Feel free to email if you have other questions.
Who uses R with multicore, SNOW or CUDA package for resource intense computing?
Since you ask, I am using the foreach package with the multicore backend. I use it to split an embarrassingly parallel workload across multiple cores on a single Nehalem box with lots of RAM. This works pretty well for the task at hand.
Who uses R with multicore, SNOW or CUDA package for resource intense computing?
I work in academia and I'm using multicore for some heavy benchmarks of machine learning algorithms, mostly on our Opteron-based Sun Constellation and some smaller clusters; those are also rather embarrassingly parallel problems, so multicore's main role is to spread computation over a node without multiplying memory usage.
Who uses R with multicore, SNOW or CUDA package for resource intense computing?
I use snow and snowfall for coarse parallelization on HPC clusters and CUDA for fine-grained data-parallel processing. I'm in epidemiology doing disease transmission modeling. So I use both.
A non-parametric repeated-measures multi-way Anova in R?
The ez package, of which I am the author, has a function called ezPerm() which computes a permutation test, but probably doesn't do interactions properly (the documentation admits as much). The latest version has a function called ezBoot(), which lets you do bootstrap resampling that takes into account repeated measures (by resampling subjects, then within subjects), either using traditional cell means as the prediction statistic or using mixed effects modelling to make predictions for each cell in the design. I'm still not sure how "non-parametric" the bootstrap CIs from mixed effects model predictions are; my intuition is that they might reasonably be considered non-parametric, but my confidence in this area is low given that I'm still learning about mixed effects models.
A non-parametric repeated-measures multi-way Anova in R?
When in doubt, bootstrap! Really, I don't know of a canned procedure to handle such a scenario. Bootstrapping is a generally applicable way of generating error estimates from the data at hand. Rather than relying on the typical parametric assumptions, bootstrap procedures capitalize on the characteristics of the sample to generate an empirical distribution against which your sample estimates can be compared. Google Scholar is gold... it's been done before... at least once: Lunneborg, Clifford E., & Tousignant, James P. (1985). "Efron's Bootstrap with Application to the Repeated Measures Design." Multivariate Behavioral Research, Vol. 20, Issue 2, p. 161 (18 pp.).
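For a concrete (if toy) sketch of a bootstrap that respects the repeated-measures structure - resample subjects with replacement, then responses within each resampled subject - here is a minimal Python version with made-up data and subject labels:

```python
import random
import statistics

random.seed(1)

# Made-up repeated-measures data: four subjects, three responses each.
data = {
    "s1": [2.1, 2.5, 1.9],
    "s2": [3.0, 2.8, 3.3],
    "s3": [1.5, 1.7, 1.6],
    "s4": [2.9, 3.1, 2.7],
}

def boot_mean(data):
    """One replicate: resample subjects, then responses within each subject."""
    subjects = random.choices(list(data), k=len(data))
    values = []
    for s in subjects:
        values.extend(random.choices(data[s], k=len(data[s])))
    return statistics.mean(values)

reps = sorted(boot_mean(data) for _ in range(2000))
ci_lo, ci_hi = reps[49], reps[1949]   # ~95% percentile interval
print(f"95% bootstrap CI for the grand mean: [{ci_lo:.2f}, {ci_hi:.2f}]")
```

The same two-stage resampling scheme extends to any cell statistic (for example, per-condition means and their differences) instead of the grand mean used here.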
A non-parametric repeated-measures multi-way Anova in R?
There is a "trick" mentioned in some forums and mailing lists - I also found it mentioned in Joop Hox's book "Multilevel Analysis" (second edition, 2010), p. 189. The idea is: you reformat your long data into a "long long" dataset in which you create a new DV that stacks all your DV responses, and use an index variable that holds information about the nature of each DV to predict this outcome. Let's assume you have 9 depression symptoms (ordinal), 2 measurement points, and 300 subjects. So while you have 300 rows in your normal dataset and 600 rows in your long dataset, this new dataset will have 9 (symptoms) x 2 (time) x 300 (subjects) = 5400 rows. The new DV variable "symptoms" now contains each participant's severity on the 9 symptoms, the variable "index" contains the information about the nature of the symptom (1 to 9), and then there are the two variables "time" and "UserID". You can now use the ordinal package to run this:

library(ordinal)
data <- read.csv("data_long_long.csv", head = TRUE)
data$symptoms <- factor(data$symptoms)
data$time <- factor(data$time)
data$index <- factor(data$index)
m1 <- clmm2(symptoms ~ index + time, random = UserID, data = data, Hess = TRUE, nAGQ = 10)

In my specific case, I was interested in whether there was a significant interaction between index and time, so I ran one additional model with the interaction term and compared the two:

m2 <- clmm2(symptoms ~ index * time, random = UserID, data = data, Hess = TRUE, nAGQ = 10)
anova(m1, m2)

clmm2 fits a random-intercept model (to the best of my knowledge, the ordinal package does not do random slopes); if you do not wish for a random intercept, you can instead run the models using clm, e.g.:

m3 <- clm(symptoms ~ index + time, data = data)
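The wide-to-"long long" reshape itself is easy to verify by hand; here is a toy sketch (in Python with only the standard library, and made-up severity values) confirming the row count of subjects × times × symptoms:

```python
import itertools, random

random.seed(1)
n_subjects, n_times, n_symptoms = 3, 2, 9   # small made-up example

# "Wide" data: one row per (subject, time) with nine symptom columns (0-4).
wide = {(s, t): [random.randint(0, 4) for _ in range(n_symptoms)]
        for s, t in itertools.product(range(n_subjects), range(n_times))}

# "Long long" data: one row per (subject, time, symptom), with a single DV
# column ("symptoms") and an "index" column naming which symptom it is.
long_long = [{"UserID": s, "time": t, "index": k, "symptoms": sev[k]}
             for (s, t), sev in wide.items()
             for k in range(n_symptoms)]

print(len(long_long))   # = n_subjects * n_times * n_symptoms
```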
17,670
Academic reference on the drawbacks of accuracy, F1 score, sensitivity and/or specificity
The main one that springs to mind is "Three myths about risk thresholds for prediction models" by Wynants et al. (2019), where they argue strongly against using a "universally optimal threshold" without context. I liked that they used the term "dichotomania" too (in effect meaning "manically dichotomising continuous variables"). I also like Peter Flach's work in the area of evaluating ML model performance. I do not have a single definitive reference there, but something like his paper with Berrar, "Caveats and pitfalls of ROC analysis in clinical microarray research (and how to avoid them)" (2012), is a reasonable place to start. His "Precision-recall-gain curves: PR analysis done right" (2015) with Kull has been very thought-provoking too.
17,671
Academic reference on the drawbacks of accuracy, F1 score, sensitivity and/or specificity
The KPI (Key Performance Indicator) depends on the requirements of the application. For some applications (i.e. those where a hard classification must be made and we know a priori that the misclassification costs are equal, e.g. some handwritten character recognition tasks), accuracy is a completely reasonable performance metric, and it would be a mistake to recommend avoiding it because it has problems as well as advantages. Similarly, for some applications (primarily information retrieval) where it is more natural to talk of the relative importance of precision and recall than of misclassification costs, $F_1$, or more generally $F_\beta$, may be appropriate, especially where we need to make a decision ("do I read this article, or don't I?"). An important consideration is whether we need to make a decision. We may well implement the system using a probabilistic classifier and then apply a threshold. However, if we need a decision, then the performance of the system depends on the setting of that threshold, so we should be using a performance metric that depends on the threshold, as we need to include the effects of the threshold on the performance of the system. The advice I would give is not to have a single KPI, but a range of performance metrics that provide information on different aspects of classifier performance. I quite often use accuracy (to measure the quality of the decisions), or equivalently the expected risk where misclassification costs are unequal; the area under the receiver operating characteristic curve (to measure the ranking of samples); and the cross-entropy (or similar) to measure the calibration of the probability estimates. Basically, our job as statisticians is to understand the advantages and disadvantages of performance metrics so that we can select the appropriate metric(s) for the needs of the application. 
All metrics have advantages and disadvantages, and we shouldn't reject any of them a priori because of their disadvantages if they have advantages or relevance for our application. I think the advantages and disadvantages are well covered in textbooks (even ML ones! ;o), so I would just use those. Also, as I have said elsewhere, we should make a distinction between performance estimation and model selection. They are not the same problem, and sometimes we should have different metrics for each task.
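A minimal illustration of using several metrics side by side (Python/NumPy for illustration, with made-up data; the AUC is computed via its rank/Mann-Whitney formulation):

```python
import numpy as np

y_true = np.array([0, 0, 0, 1, 1, 1])
p_hat  = np.array([0.1, 0.6, 0.35, 0.8, 0.45, 0.7])  # predicted P(y = 1)

# Accuracy at a 0.5 threshold: quality of the hard decisions
# (threshold-dependent, assumes equal misclassification costs).
accuracy = np.mean((p_hat >= 0.5).astype(int) == y_true)

# AUC: probability a random positive is ranked above a random negative
# (threshold-free; measures ranking only, ignoring calibration).
pos, neg = p_hat[y_true == 1], p_hat[y_true == 0]
auc = np.mean(pos[:, None] > neg[None, :])

# Cross-entropy (log loss): sensitive to the calibration of the probabilities.
log_loss = -np.mean(y_true * np.log(p_hat) + (1 - y_true) * np.log(1 - p_hat))

print(accuracy, auc, log_loss)
```

Each number answers a different question about the same classifier, which is exactly why no single one of them should be the lone KPI.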
17,672
Many binary classifiers vs. single multiclass classifier
Your Option 1 may not be the best way to go; if you want to have multiple binary classifiers, try a strategy called One-vs-All. In One-vs-All you essentially have one expert binary classifier that is really good at recognizing one pattern from all the others, and the implementation strategy is typically cascaded. For example:

if classifierNone says None: you are done
else if classifierThumbsUp says ThumbsUp: you are done
else if classifierClenchedFist says ClenchedFist: you are done
else: it must be AllFingersExtended, and thus you are done

There is a graphical explanation of One-vs-All in Andrew Ng's course (image omitted here). Multi-class classifiers, pros and cons. Pros: easy to use out of the box; great when you have really many classes. Cons: usually slower than binary classifiers during training; for high-dimensional problems they can really take a while to converge. Popular methods: neural networks, tree-based algorithms. One-vs-All classifiers, pros and cons. Pros: since they use binary classifiers, they are usually faster to converge; great when you have a handful of classes. Cons: really annoying to deal with when you have too many classes; you need to be careful when training to avoid class imbalances that introduce bias, e.g. if you have 1000 samples of the none class and 3000 samples of the thumbs_up class. Popular methods: SVMs, most ensemble methods, tree-based algorithms.
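A toy sketch of the One-vs-All idea (Python/NumPy, made-up 2-D data; each per-class "expert" here is just a nearest-centroid score for its own class against everything else, standing in for a real binary classifier, and the most confident expert wins):

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up data: 3 gesture classes in 2 features, well-separated clusters.
true_centers = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])
X = np.vstack([c + rng.normal(0, 0.5, size=(40, 2)) for c in true_centers])
y = np.repeat([0, 1, 2], 40)

# "Train" one expert per class: here, simply the class centroid. Its binary
# score is how much closer x is to "my" class than to any other class.
centroids = np.array([X[y == k].mean(axis=0) for k in range(3)])

def ova_scores(x):
    d = np.linalg.norm(centroids - x, axis=1)   # distance to each centroid
    return np.array([np.min(np.delete(d, k)) - d[k] for k in range(3)])

# Argmax over the binary scores (equivalent in spirit to the cascade above
# when the experts are well calibrated).
pred = np.array([int(np.argmax(ova_scores(x))) for x in X])
print((pred == y).mean())
```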
17,673
Why is hierarchical softmax better for infrequent words, while negative sampling is better for frequent words?
I'm not an expert in word2vec, but upon reading Rong, X. (2014), "word2vec Parameter Learning Explained", and from my own NN experience, I'd simplify the reasoning to this: Hierarchical softmax provides an improvement in training efficiency since the output is computed by a tree-like traversal of output units; a given training sample only has to evaluate/update $O(\log N)$ network units, not $O(N)$. This essentially lets the weights support a large vocabulary - a given word is related to fewer neurons, and vice versa. Negative sampling is a way to sample the training data, similar in spirit to stochastic gradient descent, but the key is that you sample negative training examples. Intuitively, it trains based on sampling places where the model might have expected a word but didn't find one, which is faster than training against the entire vocabulary every iteration and makes sense for common words. The two methods don't seem to be mutually exclusive in theory, but anyway that seems to be why they'd be better for infrequent and frequent words, respectively.
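The efficiency argument is easy to quantify: a full softmax touches all V output vectors per training example, a (roughly balanced) hierarchical softmax touches about log2(V) internal nodes, and negative sampling touches k+1 vectors. A back-of-the-envelope sketch in Python (vocabulary size and k are made-up, and the tree is assumed balanced for simplicity):

```python
import math

V = 100_000   # vocabulary size (hypothetical)
k = 5         # negative samples per example (hypothetical)

full_softmax_updates = V                         # every output vector
hier_softmax_updates = math.ceil(math.log2(V))   # ~ depth of a balanced tree
neg_sampling_updates = k + 1                     # k negatives + 1 positive

print(full_softmax_updates, hier_softmax_updates, neg_sampling_updates)
```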
17,674
Why is hierarchical softmax better for infrequent words, while negative sampling is better for frequent words?
My understanding is that this is because of the Huffman coding used when building the category hierarchy. Hierarchical softmax uses a tree of sigmoid nodes instead of one big softmax; Huffman coding ensures that the distribution of data points belonging to each side of any sigmoid node is balanced. Therefore it helps eliminate the preference towards frequent categories, compared with using one big softmax and negative sampling.
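The Huffman construction behind this takes only a few lines; the sketch below (Python standard library, made-up word frequencies) shows that frequent words end up near the root - short codes, i.e. few sigmoid nodes to update - while rare words sit deep in the tree:

```python
import heapq, itertools

# Made-up word frequencies: "the" is frequent, "quokka" is rare.
freqs = {"the": 1000, "of": 700, "cat": 50, "sat": 40, "quokka": 2}

# Standard Huffman construction: repeatedly merge the two lightest subtrees.
tie = itertools.count()   # tie-breaker so heapq never compares the dicts
heap = [(f, next(tie), {w: ""}) for w, f in freqs.items()]
heapq.heapify(heap)
while len(heap) > 1:
    f1, _, c1 = heapq.heappop(heap)
    f2, _, c2 = heapq.heappop(heap)
    merged = {w: "0" + code for w, code in c1.items()}
    merged.update({w: "1" + code for w, code in c2.items()})
    heapq.heappush(heap, (f1 + f2, next(tie), merged))

codes = heap[0][2]
print(codes)   # code length = number of tree nodes visited for that word
```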
17,675
Why is hierarchical softmax better for infrequent words, while negative sampling is better for frequent words?
Hierarchical softmax builds a tree over the whole vocabulary, and the leaf nodes representing rare words inevitably inherit their ancestors' vector representations in the tree, which can be affected by the other, frequent words in the corpus. This also benefits incremental training on a new corpus. Negative sampling is based on noise-contrastive estimation: it randomly samples words not in the context to distinguish the observed data from artificially generated random noise.
17,676
Back transforming regression results when modeling log(y)
It depends on what you want to obtain at the other end. A confidence interval for a transformed parameter transforms just fine. If it has the nominal coverage on the log scale it will have the same coverage back on the original scale, because of the monotonicity of the transformation. A prediction interval for a future observation also transforms just fine. An interval for a mean on the log scale will not generally be a suitable interval for the mean on the original scale. However, sometimes you can either exactly or approximately produce a reasonable estimate for the mean on the original scale from the model on the log scale. However, care is required or you might end up producing estimates that have somewhat surprising properties (it's possible to produce estimates that don't themselves have a population mean for example; this isn't everyone's idea of a good thing). So for example, in the lognormal case, when you exponentiate back, you have a nice estimate of $\exp(\mu_i)$, and you might note that the population mean is $\exp(\mu_i+\frac{1}{2}\sigma^2)$, so you may think to improve $\exp(\hat{\mu_i})$ by scaling it by some estimate of $\exp(\frac{1}{2}\sigma^2)$. One should at least be able to get consistent estimation and indeed some distributional asymptotics via Slutsky's theorem (specifically the product-form) as long as one can consistently estimate the adjustment. The continuous mapping theorem says that you can if you can estimate $\sigma^2$ consistently... which is the case. So as long as $\hat{\sigma}^2$ is a consistent estimator of $\sigma^2$, then $\exp(\hat{\mu_i})\cdot \exp(\frac{1}{2}\hat{\sigma}^2)$ converges in distribution to the distribution of $\exp(\hat{\mu_i})\cdot \exp(\frac{1}{2}\sigma^2)$ (which by inspection will then be asymptotically lognormally distributed). 
Since $\hat{\mu_i}$ will be consistent for $\mu_i$, by the continuous mapping theorem $\exp(\hat{\mu_i})$ will be consistent for $\exp(\mu_i)$, and so we have a consistent estimator of the mean on the original scale. See here. Some related posts: Back transformation of an MLR model Back Transformation Back-transformed confidence intervals
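The bias of the naive back-transform, and the $\exp(\frac{1}{2}\hat{\sigma}^2)$ correction described above, are easy to see by simulation (Python/NumPy; the parameters are made up):

```python
import numpy as np

rng = np.random.default_rng(42)
mu, sigma = 1.0, 0.8
y = rng.lognormal(mean=mu, sigma=sigma, size=200_000)  # log(y) ~ N(mu, sigma^2)

log_y = np.log(y)
mu_hat, sigma2_hat = log_y.mean(), log_y.var(ddof=1)

naive     = np.exp(mu_hat)                     # estimates exp(mu): the *median*
corrected = np.exp(mu_hat + 0.5 * sigma2_hat)  # estimates the mean exp(mu + sigma^2/2)
true_mean = np.exp(mu + 0.5 * sigma**2)

print(naive, corrected, y.mean(), true_mean)
```

The naive estimate sits well below the sample mean (it targets the median of a right-skewed distribution), while the corrected estimate lands close to it.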
17,677
Intraclass Correlation Coefficient in mixed model with random slopes
Basically there's no single number or estimate that can summarize the degree of clustering in a random slopes model. The intra-class correlation (ICC) can only be written as a simple proportion of variances in random-intercepts-only models. To see why, a sketch of the derivation of the ICC expression can be found here. When you throw random slopes into the model equation, following the same steps leads instead to the ICC expression on page 5 of this paper. As you can see, that complicated expression is a function of the predictor X. To see more intuitively why var(Y) depends on X when there are random slopes, check out page 30 of these slides ("Why does the variance depend on x?"). Because the ICC is a function of the predictors (the x-values), it can only be computed for particular sets of x-values. You could perhaps try something like reporting the ICC at the joint average of the x-values, but this estimate will be demonstrably inaccurate for the majority of the observations. Everything I've said still only refers to cases where there is a single random factor. With multiple random factors it becomes even more complicated. For example, in a multi-site project where participants at each site respond to a sample of stimuli (i.e., 3 random factors: site, participant, stimulus), we could ask about many different ICCs: What is the expected correlation between two responses at the same site, to the same stimulus, from different participants? How about at different sites, the same stimulus, and different participants? And so on. @rvl mentions these complications in the answer that the OP linked to. So as you can see, the only case where we can summarize the degree of clustering with a single value is the single-random-factor random-intercept-only case. Because this is such a small proportion of real-world cases, ICCs are not that useful most of the time. So my general recommendation is to not even worry about them. 
Instead I recommend just reporting the variance components (preferably in standard deviation form).
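For the one case where a single ICC does make sense (one random factor, random intercepts only), it is just the between-cluster share of the variance, $\mathrm{ICC}=\sigma^2_b/(\sigma^2_b+\sigma^2_w)$. A simulation sketch (Python/NumPy, balanced design, the classical one-way ANOVA / method-of-moments estimator; all numbers are made up):

```python
import numpy as np

rng = np.random.default_rng(7)
n_groups, n_per = 200, 20
sigma_b, sigma_w = 1.0, 2.0   # true ICC = 1 / (1 + 4) = 0.2

u = rng.normal(0, sigma_b, n_groups)                   # random intercepts
y = u[:, None] + rng.normal(0, sigma_w, (n_groups, n_per))

# One-way ANOVA variance components for balanced data:
msw = y.var(axis=1, ddof=1).mean()           # within-group mean square
msb = n_per * y.mean(axis=1).var(ddof=1)     # between-group mean square
sigma2_w_hat = msw
sigma2_b_hat = (msb - msw) / n_per

icc = sigma2_b_hat / (sigma2_b_hat + sigma2_w_hat)
print(icc)   # close to the true 0.2 for this sample size
```

The moment you add a random slope, the analogous ratio becomes a function of x, which is exactly why no single number of this kind exists in that case.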
17,678
R: What do I see in partial dependence plots of gbm and RandomForest?
I spent some time writing my own "partial.function-plotter" before I realized it was already bundled in the R randomForest library. [EDIT ...but then I spent a year making the CRAN package forestFloor, which is by my opinion significantly better than classical partial dependence plots] Partial.function plot are great in instances as this simulation example you show here, where the explaining variable do not interact with other variables. If each explaining variable contribute additively to the target-Y by some unknown function, this method is great to show that estimated hidden function. I often see such flattening in the borders of partial functions. Some reasons: randomForsest has an argument called 'nodesize=5' which means no tree will subdivide a group of 5 members or less. Therefore each tree cannot distinguish with further precision. Bagging/bootstrapping layer of ensemple smooths by voting the many step functions of the individual trees - but only in the middle of the data region. Nearing the borders of data represented space, the 'amplitude' of the partial.function will fall. Setting nodesize=3 and/or get more observations compared to noise can reduce this border flatting effect... When signal to noise ratio falls in general in random forest the predictions scale condenses. Thus the predictions are not absolutely terms accurate, but only linearly correlated with target. You can see the a and b values as examples of and extremely low signal to noise ratio, and therefore these partial functions are very flat. It's a nice feature of random forest that you already from the range of predictions of training set can guess how well the model is performing. OOB.predictions is great also.. flattening of partial plot in regions with no data is reasonable: As random forest and CART are data driven modeling, I personally like the concept that these models do not extrapolate. Thus prediction of c=500 or c=1100 is the exactly same as c=100 or in most instances also c=98. 
Here is a code example in which the border flattening is reduced. I have not tried the gbm package; here is some illustrative code based on your example:

#more observations are created...
a <- runif(5000, 1, 100)
b <- runif(5000, 1, 100)
c <- (1:5000)/50 + rnorm(5000, mean = 0, sd = 0.1)
y <- (1:5000)/50 + rnorm(5000, mean = 0, sd = 0.1)
par(mfrow = c(1,3))
plot(y ~ a); plot(y ~ b); plot(y ~ c)
Data <- data.frame(matrix(c(y, a, b, c), ncol = 4))
names(Data) <- c("y", "a", "b", "c")
library(randomForest)

#a smaller nodesize is "not as important" when the number of observations is increased
#more trees can smooth the flattening so boundary regions get the best possible signal to noise; how many are needed is data specific
plot.partial = function() {
  partialPlot(rf.model, Data[,2:4], x.var = "a", xlim=c(1,100), ylim=c(1,100))
  partialPlot(rf.model, Data[,2:4], x.var = "b", xlim=c(1,100), ylim=c(1,100))
  partialPlot(rf.model, Data[,2:4], x.var = "c", xlim=c(1,100), ylim=c(1,100))
}

#worst case! : only 100 samples from Data and nodesize=30
rf.model <- randomForest(y ~ a + b + c, data = Data[sample(5000,100),], nodesize=30)
plot.partial()

#reasonable settings for the least partial flattening with few observations: 100 samples, nodesize=5 and ntree=2000
rf.model <- randomForest(y ~ a + b + c, data = Data[sample(5000,100),], nodesize=5, ntree=2000)
plot.partial()

#more observations is great!
rf.model <- randomForest(y ~ a + b + c, data = Data[sample(5000,5000),], nodesize=5, ntree=2000)
plot.partial()
R: What do I see in partial dependence plots of gbm and RandomForest?
As mentioned in the comments above, the gbm model would be better with some parameter tuning. An easy way to spot problems in the model and the need for such parameters is to generate some diagnostic plots. For example, for the gbm model above with the default parameters (and using the plotmo package to create the plots) we have

gbm.gaus <- gbm(y~., data = Data, dist = "gaussian")
library(plotmo)   # for the plotres function
plotres(gbm.gaus) # plot the error per ntrees and the residuals

which gives

In the left hand plot we see that the error curve hasn't bottomed out. And in the right hand plot the residuals are not what we would want. If we rebuild the model with a bigger number of trees

gbm.gaus1 <- gbm(y~., data = Data, dist = "gaussian", n.trees=5000, interact=3)
plotres(gbm.gaus1)

we get

We see the error curve bottom out with a large number of trees, and the residuals plot is healthier. We can also plot the partial dependence plots for the new gbm model and the random forest model

library(plotmo)
plotmo(gbm.gaus1, pmethod="partdep", all1=TRUE, all2=TRUE)
plotmo(rf.model, pmethod="partdep", all1=TRUE, all2=TRUE)

which gives

The gbm and random forest model plots are now similar, as expected.
R: What do I see in partial dependence plots of gbm and RandomForest?
You need to update the interaction.depth parameter when you build your boosted model. It defaults to 1, and that will cause all the trees that the gbm algorithm builds to split only once each. This means every tree just splits on variable c and, depending on the sample of observations it uses, splits somewhere around 40 - 60. Here are the partial plots with interaction.depth = 3
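A minimal sketch of the point above (my own code, not the original poster's; the simulated data is only in the spirit of the question, and the n.trees/shrinkage values are arbitrary choices):

```r
# Illustrative: y depends only on c; a and b are pure noise.
library(gbm)
set.seed(42)
n <- 1000
Data <- data.frame(a = runif(n, 1, 100), b = runif(n, 1, 100),
                   c = runif(n, 1, 100))
Data$y <- Data$c + rnorm(n, sd = 2)

# With interaction.depth = 1 every tree makes a single split (on c);
# with interaction.depth = 3 each tree can refine its fit further.
gbm.deep <- gbm(y ~ a + b + c, data = Data, distribution = "gaussian",
                n.trees = 2000, interaction.depth = 3, shrinkage = 0.01)

# partial dependence for each predictor
par(mfrow = c(1, 3))
for (v in c("a", "b", "c")) plot(gbm.deep, i.var = v, n.trees = 2000)
```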
Is there a function in R that takes the centers of clusters that were found and assigns clusters to a new data set
You can compute the cluster assignments for a new data set with the following function:

clusters <- function(x, centers) {
  # compute squared euclidean distance from each sample to each cluster center
  tmp <- sapply(seq_len(nrow(x)),
                function(i) apply(centers, 1,
                                  function(v) sum((x[i, ]-v)^2)))
  max.col(-t(tmp))  # find index of min distance
}

# create a simple data set with two clusters
set.seed(1)
x <- rbind(matrix(rnorm(100, sd = 0.3), ncol = 2),
           matrix(rnorm(100, mean = 1, sd = 0.3), ncol = 2))
colnames(x) <- c("x", "y")
x_new <- rbind(matrix(rnorm(10, sd = 0.3), ncol = 2),
               matrix(rnorm(10, mean = 1, sd = 0.3), ncol = 2))
colnames(x_new) <- c("x", "y")

cl <- kmeans(x, centers=2)

all.equal(cl[["cluster"]], clusters(x, cl[["centers"]]))
# [1] TRUE
clusters(x_new, cl[["centers"]])
# [1] 2 2 2 2 2 1 1 1 1 1

plot(x, col=cl$cluster, pch=3)
points(x_new, col= clusters(x_new, cl[["centers"]]), pch=19)
points(cl[["centers"]], pch=4, cex=2, col="blue")

or you could use the flexclust package, which has an implemented predict method for k-means:

library("flexclust")
data("Nclus")
set.seed(1)
dat <- as.data.frame(Nclus)
ind <- sample(nrow(dat), 50)
dat[["train"]] <- TRUE
dat[["train"]][ind] <- FALSE

cl1 = kcca(dat[dat[["train"]]==TRUE, 1:2], k=4, kccaFamily("kmeans"))
cl1
#
# call:
# kcca(x = dat[dat[["train"]] == TRUE, 1:2], k = 4)
#
# cluster sizes:
#
#   1   2   3   4
# 130 181  98  91

pred_train <- predict(cl1)
pred_test <- predict(cl1, newdata=dat[dat[["train"]]==FALSE, 1:2])

image(cl1)
points(dat[dat[["train"]]==TRUE, 1:2], col=pred_train, pch=19, cex=0.3)
points(dat[dat[["train"]]==FALSE, 1:2], col=pred_test, pch=22, bg="orange")

There are also conversion methods to convert the results from cluster functions like stats::kmeans or cluster::pam to objects of class kcca and vice versa:

as.kcca(cl, data=x)
# kcca object of family ‘kmeans’
#
# call:
# as.kcca(object = cl, data = x)
#
# cluster sizes:
#
#  1  2
# 50 50
Is there a function in R that takes the centers of clusters that were found and assigns clusters to a new data set
step 1: a function computing the squared distance between a vector and each row of a matrix

calc_vec2mat_dist = function(x, ref_mat) {
  # compute row-wise vec2vec distance
  apply(ref_mat, 1, function(r) sum((r - x)^2))
}

step 2: a function that applies the vec2mat function to every row of the input matrix

calc_mat2mat_dist = function(input_mat, ref_mat) {
  dist_mat = apply(input_mat, 1, function(r) calc_vec2mat_dist(r, ref_mat))
  # transpose so there is one row per input data point
  # and one column per centroid
  cbind(t(dist_mat), max.col(-t(dist_mat)))
}

step 3: apply the mat2mat function

calc_mat2mat_dist(my_input_mat, kmeans_model$centers)

step 4: optionally use plyr::ddply and doMC to parallelize mat2mat for a big dataset

library(doMC)
library(plyr)

pred_cluster_para = function(input_df, center_mat, cl_feat, id_cols, use_ncore = 8) {
  # assign cluster labels to each individual (row) in input_df
  # input: input_df - dataframe with all features used in clustering, plus some id/indicator columns
  # input: center_mat - matrix of centroids, K rows by M features
  # input: cl_feat - list of features (col names)
  # input: id_cols - list of index cols (e.g. id) to include in output
  # output: output_df - dataframe with the same number of rows as the input,
  #   K columns of distances to each cluster,
  #   1 column of cluster labels,
  #   plus the columns named in id_cols
  n_cluster = nrow(center_mat)
  n_feat = ncol(center_mat)
  n_input = nrow(input_df)
  if(!(typeof(center_mat) %in% c('double','integer') & is.matrix(center_mat))){
    stop('The argument "center_mat" must be a numeric matrix')
  } else if(length(cl_feat) != n_feat) {
    stop(sprintf('cl_feat size: %d , center_mat n_col: %d, they have to match!', length(cl_feat), n_feat))
  } else {
    # register the MultiCore backend through the doMC and foreach packages
    doMC::registerDoMC(cores = use_ncore)
    # create job_key for mapping/splitting the input data
    input_df[,'job_idx'] = sample(1:use_ncore, n_input, replace = TRUE)
    # create row_key for tracing the original row order, which will be shuffled by map-reduce
    input_df[,'row_idx'] = seq(n_input)
    # use ddply (df input, df output) to split-process-combine
    output_df = ddply(
      input_df[, c('job_idx','row_idx',cl_feat,id_cols)], # input big data
      'job_idx', # map/split by job_idx
      function(chunk) { # work on each chunk
        dist = data.frame(calc_mat2mat_dist(chunk[,cl_feat], center_mat))
        names(dist) = c(paste0('dist2c_', seq(n_cluster)), 'pred_cluster')
        dist[,id_cols] = chunk[,id_cols]
        dist[,'row_idx'] = chunk[,'row_idx']
        dist # product of mapper
      }, .parallel = TRUE) # end of ddply
    # sort back to the original row order
    output_df = output_df[order(output_df$row_idx),]
    output_df[c('job_idx')] = NULL
    return(output_df)
  }
}
Intuition and uses for coefficient of variation
I think of it as a relative measure of spread or variability in the data. The statement "the standard deviation is 2.4" really tells you nothing without reference to the mean (and thus the unit of measure, I suppose). If the mean is 104, a standard deviation of 2.4 communicates quite a different picture of spread than if the mean were 25,452 with the same standard deviation of 2.4. For the same reason you normalize data (subtract the mean and divide by the standard deviation) to place data expressed in different units on a comparable or equal footing, this measure of variability is normalized - to aid in comparisons.
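A tiny numeric illustration of the point (my numbers, not from the answer): the same standard deviation yields very different coefficients of variation depending on the mean.

```r
# Coefficient of variation: sd expressed relative to the mean.
cv <- function(x) sd(x) / mean(x)

x1 <- c(101, 104, 107)        # mean 104,   sd 3
x2 <- c(25449, 25452, 25455)  # mean 25452, sd 3

cv(x1)  # ~0.0288:   the sd is about 3% of the mean
cv(x2)  # ~0.000118: the same sd is negligible relative to this mean
```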
Intuition and uses for coefficient of variation
Coefficient of variation is effectively a normalized or relative measure of the variation in a data set, (e.g. a time series) in that it is a proportion (and therefore can be expressed as a percentage). Intuitively, if the mean is the expected value, then the coefficient of variation is the expected variability of a measurement, relative to the mean. This is useful when comparing measurements across multiple heterogeneous data sets or across multiple measurements taken on the same data set - the coefficient of variation between two data sets, or calculated for two sets of measurements can be directly compared, even if the data in each are measured on very different scales, sampling rates or resolutions. In contrast, standard deviation is specific to the measurement/sample it is obtained from, i.e. it is an absolute rather than a relative measure of variation.
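A small sketch of the comparability point (my own example, not from the answer): the coefficient of variation is invariant to the measurement scale, while the standard deviation is not, which is what makes it comparable across data sets measured in different units.

```r
cv <- function(x) sd(x) / mean(x)

heights_cm <- c(160, 170, 180, 175, 165)
heights_m  <- heights_cm / 100   # same data, different unit

sd(heights_cm) == sd(heights_m)          # FALSE: sd carries the unit
all.equal(cv(heights_cm), cv(heights_m)) # TRUE: cv is unit-free
```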
Generate three correlated uniformly-distributed random variables
The question contains several errors as noted in comments - as defined in the question, Z is neither uniform nor has the specified correlation. cardinal mentions copulas, and that's the most general way to go about it. However, there are several quite easy ways to get correlated uniforms (which can be seen as mere shortcuts to different kinds of copulas). So let's start with some ways to get a pair of correlated uniforms.

1) If you add two uniforms the result is triangular, not uniform. But you can use the cdf of the resulting variable as a transform to take the result back to a uniform. The result isn't linearly correlated any more, of course. Here's an R function to transform a symmetric triangular on (0,2) to standard uniform

t2u = function(x) ifelse(x<1, x^2, 2-(2-x)^2)/2

Let's check that it does give a uniform

u1 = runif(30000)
u2 = runif(30000)
v1 = t2u(u1+u2)

And it's correlated with u1 and u2:

> cor(cbind(u1,u2,v1))
            u1          u2        v1
u1 1.000000000 0.006311667 0.7035149
u2 0.006311667 1.000000000 0.7008528
v1 0.703514895 0.700852805 1.0000000

but not linearly, due to the monotonic transformation to uniformity. With this as a tool we can generate some additional variables to get three equicorrelated uniforms:

u3 = runif(30000)
v2 = t2u(u1+u3)
v3 = t2u(u2+u3)

cor(cbind(v1,v2,v3))
          v1        v2        v3
v1 1.0000000 0.4967572 0.4896972
v2 0.4967572 1.0000000 0.4934746
v3 0.4896972 0.4934746 1.0000000

The relationship between the v-variables all look like this:

--

2) A second alternative is to generate by taking a mixture. Instead of summing uniforms, take them with fixed probabilities, e.g.

z = ifelse(rbinom(30000,1,.7),u1,u2)

cor(cbind(u1,z))
          u1         z
u1 1.0000000 0.7081533
z  0.7081533 1.0000000

which can again be used to generate multiple correlated uniforms.

--

3) A third simple approach is to generate correlated normals and transform to uniformity.

n1=rnorm(30000); n2=rnorm(30000); n3=rnorm(30000)
x=.6*n1+.8*n2
y=.6*n2+.8*n3
z=.6*n3+.8*n1

cor(cbind(x,y,z))
          x         y         z
x 1.0000000 0.4763703 0.4792897
y 0.4763703 1.0000000 0.4769403
z 0.4792897 0.4769403 1.0000000

So now we convert to uniform:

w1 = pnorm(x)
w2 = pnorm(y)
w3 = pnorm(z)

cor(cbind(w1,w2,w3))
          w1        w2        w3
w1 1.0000000 0.4606723 0.4623311
w2 0.4606723 1.0000000 0.4620257
w3 0.4623311 0.4620257 1.0000000

One nice thing about methods 2 and 3 is that you get plenty of variety in your choice of how correlated things might be (and they don't have to be equicorrelated like the examples here). There's a large variety of other approaches of course, but these are all quick and easy. The tricky part is getting exactly the desired population correlation; it's not quite so simple as when you just want correlated Gaussians. Quantibex's answer at Generate pairs of random numbers uniformly distributed and correlated gives an approach that modifies my third method here which should give about the desired population correlation.
Generate three correlated uniformly-distributed random variables
First, do you assume that $X_1,X_2$ are independent? If they are, then the correlation coefficient between $Z$ and $X_1$ is not $0.4$. It would be $0.4$ if $Y$ were defined as $Y = 0.4X_1+\sqrt{1-(0.4)^2}X_2$. A simple look at the definition of the correlation coefficient and the law of cosines should convince you that $\rho$ is the $\cos$ between $2$ series if the series are treated as vectors, with each data point treated as a dimension of a vector. If you have $3$ pairwise independent series, that's three vectors that are all orthogonal to each other (because the $\cos$'s of the angles between them are all $0$). This should start you on the way to decomposing a series into its components, the same way you'd decompose a vector into its orthogonal components.
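The cosine interpretation is easy to verify numerically; a quick sketch (my own, not from the answer): for mean-centered vectors, the Pearson correlation is exactly the cosine of the angle between them.

```r
set.seed(1)
u <- rnorm(1000)
v <- 0.4 * u + sqrt(1 - 0.4^2) * rnorm(1000)

# cosine of the angle between the mean-centered vectors
uc <- u - mean(u); vc <- v - mean(v)
cos_angle <- sum(uc * vc) / sqrt(sum(uc^2) * sum(vc^2))

all.equal(cos_angle, cor(u, v))  # TRUE
```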
17,687
Why is it wrong to interpret SVM as classification probabilities?
An SVM does not feed anything into a sigmoid function. It fits a separating hyperplane to the data that tries to put all data points from your training set that are of one class on one side, and all points of the other class on the other. Consequently, it assigns class based on which side your feature vector is on. More formally, if we denote the feature vector as $\mathbf{x}$, the hyperplane coefficients as $\boldsymbol{\beta}$, and the intercept as $\beta_0$, then the class assignment is $y = \mathrm{sign}(\boldsymbol\beta \cdot \mathbf{x} + \beta_0)$. Solving an SVM amounts to finding $\boldsymbol\beta, \beta_0$ which minimize the hinge loss with the greatest possible margin. Therefore, because an SVM only cares about which side of the hyperplane you are on, you cannot transform its class assignments into probabilities. In the case of a linear SVM (no kernel), the decision boundary will be similar to that of a logistic regression model, but may vary depending on the regularization strength you used to fit the SVM. Because the SVM and LR solve different optimization problems, you are not guaranteed to have identical solutions for the decision boundary. There are many resources out there about the SVM which will help clarify things: here is one example, and another one.
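To make the side-of-the-hyperplane point concrete, here is a small Python sketch with made-up coefficients (nothing here is a fitted SVM; the numbers are purely illustrative). The raw decision values are unbounded margins, not numbers in $[0,1]$:

```python
# Hypothetical hyperplane coefficients (illustrative only, not a fitted SVM):
beta = [2.0, -1.0]
beta0 = 0.5

def svm_decision(x):
    # Linear score: beta . x + beta0
    return sum(b * xi for b, xi in zip(beta, x)) + beta0

def svm_class(x):
    # The SVM only reports which side of the hyperplane x lies on.
    return 1 if svm_decision(x) >= 0 else -1

points = [[3.0, 1.0], [-2.0, 4.0], [0.0, 0.0]]
scores = [svm_decision(p) for p in points]
labels = [svm_class(p) for p in points]
print(scores)  # [5.5, -7.5, 0.5] (unbounded margins, not probabilities)
print(labels)  # [1, -1, 1]
```

Methods such as Platt scaling post-process these scores into probability-like values, but that is an extra model fitted on top, not something the SVM itself provides.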
17,688
What distribution does the inverse normal CDF of a beta random variable follow?
Synopsis

You have rediscovered part of the construction described at Central Limit Theorem for Sample Medians, which illustrates an analysis of the median of a sample. (The analysis obviously applies, mutatis mutandis, to any quantile, not just the median). Therefore it is no surprise that for large Beta parameters (corresponding to large samples) a Normal distribution arises under the transformation described in the question. What is of interest is how close to Normal the distribution is even for small Beta parameters. That deserves an explanation.

I will sketch an analysis below. To keep this post at a reasonable length, it involves a lot of suggestive hand-waving: I aim only to point out the key ideas. Let me therefore summarize the results here:

1. When $\alpha$ is close to $\beta$, everything is symmetric. This causes the transformed distribution already to look Normal.

2. The functions of the form $\Phi^{\alpha-1}(x)\left(1-\Phi(x)\right)^{\beta-1}$ look fairly Normal in the first place, even for small values of $\alpha$ and $\beta$ (provided both exceed $1$ and their ratio is not too close to $0$ or $1$).

3. The apparent Normality of the transformed distribution is due to the fact that its density consists of a Normal density multiplied by a function in (2).

4. As $\alpha$ and $\beta$ increase, the departure from Normality can be measured in the remainder terms in a Taylor series for the log density. The term of order $n$ decreases in proportion to the $(n-2)/2$ powers of $\alpha$ and $\beta$. This implies that eventually, for sufficiently large $\alpha$ and $\beta$, all terms of power $n=3$ or greater have become relatively small, leaving only a quadratic: which is precisely the log density of a Normal distribution.

Collectively, these behaviors nicely explain why even for small $\alpha$ and $\beta$ the non-extreme quantiles of an iid Normal sample look approximately Normal.
Analysis

Because it can be useful to generalize, let $F$ be any distribution function, although we have in mind $F=\Phi$. The density function $g(y)$ of a Beta$(\alpha,\beta)$ variable is, by definition, proportional to $$y^{\alpha-1}(1-y)^{\beta-1}dy.$$ Letting $y=F(x)$ be the probability integral transform of $x$ and writing $f$ for the derivative of $F$, it is immediate that $x$ has a density proportional to $$G(x;\alpha,\beta)=F(x)^{\alpha-1}(1-F(x))^{\beta-1}f(x)dx.$$ Because this is a monotonic transformation of a strongly unimodal distribution (a Beta), unless $F$ is rather strange, the transformed distribution will be unimodal, too. To study how close to Normal it might be, let's examine the logarithm of its density, $$\log G(x;\alpha,\beta) = (\alpha-1)\log F(x) + (\beta-1)\log(1-F(x)) + \log f(x) + C\tag{1}$$ where $C$ is an irrelevant constant of normalization.

Expand the components of $\log G(x;\alpha,\beta)$ in Taylor series to order three around a value $x_0$ (which will be close to a mode). For instance, we may write the expansion of $\log F$ as $$\log F(x) = c^{F}_0 + c^{F}_1 (x-x_0) + c^{F}_2(x-x_0)^2 + c^{F}_3h^3$$ for some $h$ with $|h| \le |x-x_0|$. Use a similar notation for $\log(1-F)$ and $\log f$.

Linear terms

The linear term in $(1)$ thereby becomes $$g_1(\alpha,\beta) = (\alpha-1)c^{F}_1 + (\beta-1)c^{1-F}_1 + c^{f}_1.$$ When $x_0$ is a mode of $G(\,;\alpha,\beta)$, this expression is zero. Note that because the coefficients are continuous functions of $x_0$, as $\alpha$ and $\beta$ are varied, the mode $x_0$ will vary continuously too. Moreover, once $\alpha$ and $\beta$ are sufficiently large, the $c^{f}_1$ term becomes relatively inconsequential.
If we aim to study the limit as $\alpha\to\infty$ and $\beta\to\infty$ for which $\alpha:\beta$ stays in constant proportion $\gamma$, we may therefore once and for all choose a base point $x_0$ for which $$\gamma c^{F}_1 + c^{1-F}_1 = 0.$$ A nice case is where $\gamma=1$, where $\alpha=\beta$ throughout, and $F$ is symmetric about $0$. In that case it is obvious that $F(x_0)=1/2$, whence $x_0=0$. We have achieved a method whereby (a) in the limit, the first-order term in the Taylor series vanishes and (b) in the special case just described, the first-order term is always zero.

Quadratic terms

These are the sum $$g_2(\alpha,\beta) = (\alpha-1)c^{F}_2 + (\beta-1)c^{1-F}_2 + c^{f}_2.$$ Comparing to a Normal distribution, whose quadratic term is $-(1/2)(x-x_0)^2/\sigma^2$, we may estimate that $-1/(2g_2(\alpha,\beta))$ is approximately the variance of $G$. Let us standardize $G$ by rescaling $x$ by its square root. We don't really need the details; it suffices to understand that this rescaling is going to multiply the coefficient of $(x-x_0)^n$ in the Taylor expansion by $(-1/(2g_2(\alpha,\beta)))^{n/2}.$

Remainder term

Here's the punchline: the term of order $n$ in the Taylor expansion is, according to our notation, $$g_n(\alpha,\beta) = (\alpha-1)c^{F}_n + (\beta-1)c^{1-F}_n + c^{f}_n.$$ After standardization, it becomes $$g_n^\prime(\alpha,\beta) = \frac{g_n(\alpha,\beta)}{(-2g_2(\alpha,\beta))^{n/2}}.$$ Both of the $g_i$ are affine combinations of $\alpha$ and $\beta$. By raising the denominator to the $n/2$ power, the net behavior is of order $-(n-2)/2$ in each of $\alpha$ and $\beta$. As these parameters grow large, then, each term in the Taylor expansion after the second decreases to zero asymptotically. In particular, the third-order remainder term becomes arbitrarily small.
The case when $F$ is normal The vanishing of the remainder term is particularly fast when $F$ is standard Normal, because in this case $f(x)$ is purely quadratic: it contributes nothing to the remainder terms. Consequently, the deviation of $G$ from normality depends solely on the deviation between $F^{\alpha-1}(1-F)^{\beta-1}$ and normality. This deviation is fairly small even for small $\alpha$ and $\beta$. To illustrate, consider the case $\alpha=\beta$. $G$ is symmetric, whence the order-3 term vanishes altogether. The remainder is of order $4$ in $x-x_0=x$. Here is a plot showing how the standardized fourth order term changes with small values of $\alpha \gt 1$: The value starts out at $0$ for $\alpha=\beta=1$, because then the distribution obviously is Normal ($\Phi^{-1}$ applied to a uniform distribution, which is what Beta$(1,1)$ is, gives a standard Normal distribution). Although it increases rapidly, it tops off at less than $0.008$--which is practically indistinguishable from zero. After that the asymptotic reciprocal decay kicks in, making the distribution ever closer to Normal as $\alpha$ increases beyond $2$.
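The near-normality claimed above can be checked by simulation even at $\alpha=\beta=2$. Here is a hedged stdlib-Python sketch (the answer itself contains no code; `NormalDist().inv_cdf` plays the role of $\Phi^{-1}$, and seed and sample size are arbitrary): the sample kurtosis of the transformed variable comes out very close to the Normal value of $3$.

```python
import random
from statistics import NormalDist

random.seed(42)
inv_cdf = NormalDist().inv_cdf  # plays the role of Phi^{-1}

a = b = 2  # deliberately small Beta parameters
ys = [inv_cdf(random.betavariate(a, b)) for _ in range(100_000)]

n = len(ys)
mean = sum(ys) / n
var = sum((y - mean) ** 2 for y in ys) / n
kurt = sum((y - mean) ** 4 for y in ys) / n / var ** 2  # equals 3 for a Normal
print(round(mean, 2), round(kurt, 2))  # mean near 0, kurtosis near 3
```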
17,689
What distribution does the inverse normal CDF of a beta random variable follow?
Convergence

Suppose that $\alpha = \beta$, let $\alpha \to \infty$, and take any small $\varepsilon > 0$. Then $var(X) \to 0$. By Chebyshev's inequality we have $\mathbb{P} [\vert X - 0.5 \vert > \varepsilon] \to 0$ and hence $\mathbb{P} [\vert Y \vert > \varepsilon] \to 0$. This means that $Y$ converges in probability to $0$ (and therefore also in distribution, to a point mass at $0$).

Exact distribution

Denote by $f_X$ the density of the beta distribution. Then your variable $Y$ has density $$ f_Y (y) = f_X ( \Phi (y) ) \phi (y). $$ Since $\Phi$ does not have a closed form, I believe that this is the furthest you can get analytically. You can try to put it into the FullSimplify function in Wolfram Mathematica to see if it finds some better form. Here is the density in R so you can plot it instead of a histogram.

f_y <- function(x, alpha, beta) {
  dbeta(pnorm(x), alpha, beta) * dnorm(x)
}

Modification

However, you are maybe interested in the distribution of $$ Z = \Phi^{-1} (\sqrt{\alpha} X) $$ (still assuming $\alpha = \beta$). This may be useful because $var(\sqrt{\alpha} X) \to 1/8$ (useful because it is not zero).
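As a quick sanity check of the limit $var(\sqrt{\alpha} X) \to 1/8$: since $var\,\text{Beta}(\alpha,\alpha) = 1/(4(2\alpha+1))$, we have $\alpha\,var(X) = \alpha/(4(2\alpha+1))$, which increases to $1/8$. A small exact computation (my own addition, using Python fractions):

```python
from fractions import Fraction

def beta_var(a):
    # Var Beta(a, a) = a*a / ((2a)^2 * (2a + 1)) = 1 / (4 * (2a + 1))
    return Fraction(a * a, (2 * a) ** 2 * (2 * a + 1))

# alpha * Var(X) for increasing alpha: increases toward 1/8 = 0.125
scaled = [a * beta_var(a) for a in (10, 100, 10_000)]
print([float(s) for s in scaled])
```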
17,690
What distribution does the inverse normal CDF of a beta random variable follow?
Here I present a heuristic explanation (which can be made rigorous at least asymptotically). For simplicity, take $k \in \mathbb N$, $k \geq 2$. Let $X \sim \text{Beta}(k,k)$. I want to argue that $Y = \Phi^{-1}(X)$ is approximately normal. Now let $n=2k-1$. We start by drawing $n$ i.i.d. uniformly distributed random variables $U_1, \dotsc, U_n$. Next, form the order statistics $U_{(1)} \leq \dotsc \leq U_{(n)}$. It is well known that $U_{(k)} \sim \text{Beta}(k, n+1-k)$, thus: $$U_{(k)} \sim \text{Beta}(k, k)$$ In other words: The sample median of $n$ i.i.d. uniformly distributed random variables is $\text{Beta}(k,k)$ distributed. Now let's transform by $Z_i = \Phi^{-1}(U_i)$. Then by the probability integral transform, the $Z_i$ are i.i.d. normally distributed. Also form the order statistics of the $Z_i$ ($Z_{(1)} \leq \dotsc \leq Z_{(n)}$). Since $\Phi^{-1}$ is strictly increasing, it follows that: $$ \Phi^{-1}(U_{(k)}) = Z_{(k)}$$ Therefore, to show that $Y$ is approximately normal, we just have to argue that the sample median of $n$ i.i.d. normal random variables is approximately normal. For $k$ large, this can be made precise by a central limit theorem for sample medians. For $k$ small, say $k=2$, I will let everyone's gut feeling do the speaking. For $a \neq b$ (but not too different) one can argue similarly by using corresponding quantiles.
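The fact that the median of $n=2k-1$ i.i.d. uniforms is Beta$(k,k)$ is easy to verify empirically. Here is a hedged stdlib-Python sketch for $k=2$ (seed and replication count are arbitrary): the sample mean and variance of the medians should match the Beta$(2,2)$ values $1/2$ and $1/20$.

```python
import random

random.seed(7)
k = 2
n = 2 * k - 1  # the median of n i.i.d. uniforms is Beta(k, k)
reps = 100_000
medians = [sorted(random.random() for _ in range(n))[k - 1] for _ in range(reps)]

m = sum(medians) / reps
v = sum((x - m) ** 2 for x in medians) / reps
print(round(m, 2), round(v, 3))  # Beta(2,2): mean 1/2, variance 1/20 = 0.05
```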
17,691
If so many people use set.seed(123) doesn't that affect randomness of world's reporting? [duplicate]
An interesting question, though I don't know whether it's answerable here at CV. A few thoughts:

- If you do an analysis involving random sampling, it's always a good idea to re-run it with different seeds, just to assess whether your results are sensitive to the choice of seed. If your results vary "much", you should revisit your analysis (and/or your code). If everyone did this, I wouldn't worry overly about the aggregate effect of everyone in the end using the same seed, because after this sanity check, everyone's results don't depend on it too much any more.

- Given that random numbers are used in many, many, many different contexts, with different models used in different applications, transforming the pseudorandom numbers in different orders and in different ways, I wouldn't worry too much about a possible systematic effect overall. Even if, yes, such an effect could in theory be visible on an aggregate level even when it is not visible to each separate researcher as per the previous bullet point.

- Finally, I personally never use 123 or 1234 as seeds. I use 1 ;-) Or the year. Or the date. I really don't think 123 or 1234 are all that prevalent as seeds. You could of course set up a poll somewhere.
17,692
Ensemble Learning: Why is Model Stacking Effective?
Think of ensembling as basically an exploitation of the central limit theorem. The central limit theorem loosely says that, as the sample size increases, the mean of the sample will become an increasingly accurate estimate of the actual location of the population mean (assuming that's the statistic you're looking at), and the variance will tighten. If you have one model and it produces one prediction for your dependent variable, that prediction will likely be high or low to some degree. But if you have 3 or 5 or 10 different models that produce different predictions, for any given observation, the models that predict too high will tend to offset those that predict too low, and the net effect will be a convergence of the average (or other combination) of the predictions towards "the truth." Not on every observation, but in general that's the tendency. And so, generally, an ensemble will outperform the best single model.
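The error-cancellation argument can be illustrated with a toy simulation. The sketch below is my own construction, assuming unbiased models with independent, equal-variance errors, which is optimistic: real models' errors are usually correlated, so actual gains are smaller. It compares the MSE of one model against the MSE of a 5-model average.

```python
import random

random.seed(0)
truth = 10.0
n_models, n_obs = 5, 50_000

single_err, ens_err = [], []
for _ in range(n_obs):
    # Each model's prediction is the truth plus its own independent error
    # (unbiased, equal variance: an optimistic assumption):
    preds = [truth + random.gauss(0, 1) for _ in range(n_models)]
    single_err.append(preds[0] - truth)
    ens_err.append(sum(preds) / n_models - truth)

def mse(errs):
    return sum(e * e for e in errs) / len(errs)

mse_single = mse(single_err)
mse_ens = mse(ens_err)
print(round(mse_single, 2), round(mse_ens, 2))  # roughly 1.0 vs 0.2 (= 1/5)
```

Under these assumptions the averaging shrinks the error variance by the number of models; with correlated errors the shrinkage is weaker but still present.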
17,693
Ensemble Learning: Why is Model Stacking Effective?
Late answer, but some key points can be added. I personally think of model stacking as "the natural sequel" of model averaging. And there is a reason why model averages are often better than single models.

Model averaging

Usually, two (different) models with similar performance often perform better than the best model when the average of the predictions is used. This is particularly true when the penalty is a convex function (MSE, RMSE...), and is a consequence of Jensen's inequality.

Model (weighted) averaging

Model averaging can be seen as a particular case of model stacking. If you use a linear model on top of the "first stage" models, you are simply optimizing the weights given to each model (while model averaging just gives the same weight to each model).

Model stacking

Then, when you drop the linear assumption on the model you train on the "stage 1 features", you simply increase the size of the space in which you "search" your model. The larger this space is, the more likely you are to find better performance. I presented this in more detail in model stacking: a tutorial. You may also be interested in this article: Stacked generalization by David H. Wolpert, which is one of the first (as far as I know) scholarly publications about this method.

Edit

I could not find many other references online, so I detailed the arguments above on my blog: why does model stacking work?
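The "optimizing the weights with a linear stage-2 model" step can be sketched as follows. This is a toy Python example of mine, with two simulated base models and weights obtained from the 2x2 normal equations (in practice the stage-2 model should be fit on out-of-fold predictions to avoid leakage): in-sample, the optimized weights can never do worse than the equal-weight average, since the equal-weight combination is one point in the space the least-squares fit searches over.

```python
import random

random.seed(3)
n = 10_000
y = [random.gauss(0, 1) for _ in range(n)]
# Two hypothetical base models: noisy views of y with different error scales.
p1 = [yi + random.gauss(0, 0.5) for yi in y]
p2 = [yi + random.gauss(0, 1.0) for yi in y]

def mse(pred):
    return sum((a - b) ** 2 for a, b in zip(pred, y)) / n

# "Stage 2" linear model without intercept: minimize ||y - w1*p1 - w2*p2||^2
# by solving the 2x2 normal equations directly.
a11 = sum(a * a for a in p1)
a12 = sum(a * b for a, b in zip(p1, p2))
a22 = sum(b * b for b in p2)
b1 = sum(a * c for a, c in zip(p1, y))
b2 = sum(b * c for b, c in zip(p2, y))
det = a11 * a22 - a12 * a12
w1 = (b1 * a22 - a12 * b2) / det
w2 = (a11 * b2 - a12 * b1) / det

avg = [(a + b) / 2 for a, b in zip(p1, p2)]
stack = [w1 * a + w2 * b for a, b in zip(p1, p2)]
mse_avg, mse_stack = mse(avg), mse(stack)
print(round(mse_avg, 3), round(mse_stack, 3))
```

As expected, the less noisy model receives the larger weight, and the stacked combination's in-sample MSE is at most that of the plain average.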
17,694
Why KNN is a non linear classifier ?
A classifier is linear if its decision boundary on the feature space is a linear function: positive and negative examples are separated by a hyperplane. This is what an SVM does by definition without the use of the kernel trick. Logistic regression also uses linear decision boundaries. Imagine you trained a logistic regression and obtained the coefficients $\beta_i$. You might want to classify a test record $\mathbf{x} =(x_1,\dots,x_k)$ if $P(\mathbf{x}) > 0.5$, where the probability is obtained from your logistic regression by: $$P(\mathbf{x}) = \frac{1}{1+e^{-(\beta_0 + \beta_1 x_1 + \dots + \beta_k x_k)}}$$ If you work out the math, you see that $P(\mathbf{x}) > 0.5$ defines a hyperplane on the feature space which separates positive from negative examples. With $k$NN you don't have a hyperplane in general. Imagine some dense region of positive points. The decision boundary to classify test instances around those points will look like a curve, not a hyperplane.
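The "work out the math" step can be checked directly (a tiny Python sketch; the coefficients are made up): $P(\mathbf{x}) > 0.5$ holds exactly when the linear score $\beta_0 + \beta_1 x_1 + \dots + \beta_k x_k$ is positive, so the boundary $P(\mathbf{x}) = 0.5$ is the hyperplane where that score is zero.

```python
import math
import random

random.seed(1)

beta0 = -0.5
beta = [1.2, -0.7, 0.3]  # hypothetical fitted logistic coefficients

def linear_score(x):
    return beta0 + sum(b * xi for b, xi in zip(beta, x))

def prob(x):
    # Logistic model: P(x) = 1 / (1 + exp(-score)).
    return 1.0 / (1.0 + math.exp(-linear_score(x)))

# P(x) > 0.5 exactly when the linear score is positive, so the decision
# boundary P(x) = 0.5 is the hyperplane linear_score(x) = 0.
points = [[random.uniform(-3, 3) for _ in range(3)] for _ in range(1000)]
agree = all((prob(x) > 0.5) == (linear_score(x) > 0) for x in points)
```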
17,695
Is differential entropy always less than infinity?
I thought about this question some more and managed to find a counterexample, thanks also to Piotr's comments above. The answer to the first question is no: the differential entropy of a continuous random variable (RV) is not always less than $\infty$. For example, consider a continuous RV $X$ whose pdf is $$f(x) = \frac{\log(2)}{x \log(x)^2}$$ for $x > 2$. It's not hard to verify that its differential entropy is infinite. It grows quite slowly though (approximately logarithmically). For the second question, I am not aware of a simple necessary and sufficient condition. However, one partial answer is as follows. Categorize a continuous RV into one of the following 3 types based on its support:

Type 1: a continuous RV whose support is bounded, i.e. contained in $[a,b]$.
Type 2: a continuous RV whose support is half-bounded, i.e. contained in $[a,\infty)$ or $(-\infty,a]$.
Type 3: a continuous RV whose support is unbounded.

Then we have the following: for a Type 1 RV, the entropy is always less than $\infty$, unconditionally; for a Type 2 RV, the entropy is less than $\infty$ if its mean ($\mu$) is finite; for a Type 3 RV, the entropy is less than $\infty$ if its variance ($\sigma^2$) is finite. The differential entropy of a Type 1 RV is less than that of the corresponding uniform distribution, i.e. $\log(b-a)$; of a Type 2 RV, less than that of the exponential distribution, i.e. $1+\log(|\mu-a|)$; and of a Type 3 RV, less than that of the Gaussian distribution, i.e. $\frac{1}{2} \log(2{\pi}e\sigma^2)$. Note that for a Type 2 or 3 RV, the above condition is only a sufficient condition. For example, consider a Type 2 RV with $$f(x) = \frac{3}{x^2}$$ for $x > 3$. Clearly, its mean is infinite, but its entropy is 3.1 nats. Or consider a Type 3 RV with $$f(x) = \frac{9}{|x|^3}$$ for $|x| > 3$. Its variance is infinite, but its entropy is 2.6 nats. So it would be great if someone could provide a complete or more elegant answer for this part.
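The counterexample can be checked numerically (a Python sketch under the assumption that $\log$ denotes the natural logarithm, which makes $f$ integrate to 1): the total mass approaches 1 as the upper cutoff grows, while the truncated entropy $-\int_2^X f \log f\,dx$ keeps increasing with $X$ instead of converging.

```python
import math

LN2 = math.log(2.0)

def f(x):
    # pdf of the counterexample, supported on (2, infinity)
    return LN2 / (x * math.log(x) ** 2)

def integrate(g, upper, steps=50000):
    # Trapezoid rule after the substitution x = exp(u), dx = x du,
    # which handles the heavy tail with a uniform grid in u.
    a, b = math.log(2.0), math.log(upper)
    h = (b - a) / steps
    total = 0.0
    for i in range(steps + 1):
        x = math.exp(a + i * h)
        w = 0.5 if i in (0, steps) else 1.0
        total += w * g(x) * x
    return total * h

mass = integrate(f, 1e8)  # approaches 1 (slowly: 1 - ln 2 / ln X)
entropy_to = lambda X: integrate(lambda x: -f(x) * math.log(f(x)), X)
h_small = entropy_to(1e4)
h_large = entropy_to(1e8)
# h keeps growing with the cutoff: the differential entropy is infinite.
```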
17,696
What if high validation accuracy but low test accuracy in research?
By definition, when training accuracy (or whatever metric you are using) is higher than your testing accuracy, you have an overfit model. In essence, your model has learned particulars that help it perform better on your training data but that are not applicable to the larger data population, and therefore result in worse performance. I'm not sure why you say k-fold validation wouldn't be helpful. Its purpose is to help avoid overfitting your models. Perhaps you don't have enough data? A statement like this is important, especially if you are going to defend any research where such cross-validation methods are highly recommended. You say you aren't able to use the test set just once (again, I assume a smaller sample size?). In my experience the most common path followed is k-fold cross-validation of your model. Let's take an example with 10-fold CV for a sample size of 100, and assume your classification problem is binary to make the calculations simple. I therefore have split my data into 10 different folds. I then fit my model to 9/10 folds and predict the 1/10 I left out. For this first run, the resulting confusion matrix is:

   0  1
0  4  1
1  2  3

I then repeat this analysis with the next 1/10 fold left out and train on the other 9/10, and get my next confusion matrix. Once completed, I have 10 confusion matrices. I would then sum these matrices (so that all 100 samples have been predicted) and then report my statistics (accuracy, PPV, F1-score, kappa, etc.). If your accuracy is not where you want it to be, there are many other possibilities:

Your model needs to be improved (change parameters)
You may need to try a different machine learning algorithm (not all algorithms are created equal)
You need more data (a subtle relationship is difficult to find)
You may need to try transforming your data (dependent upon the algorithm used)
There may be no relationship between your dependent and independent variables

The fact of the matter is, a lower testing metric (e.g. accuracy) than your training metric is indicative of overfitting your model, which is not something you want when trying to create a new predictive model.
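The 10-fold procedure described above can be sketched in Python (everything here, including the toy threshold "model" and the data, is invented for illustration): fit on 9/10 of the data, predict the held-out 1/10, accumulate one summed confusion matrix, then compute statistics from it.

```python
import random

random.seed(7)

# Made-up data: 100 samples with one feature x and a noisy binary label.
data = []
for _ in range(100):
    x = random.gauss(0, 1)
    label = 1 if x + random.gauss(0, 0.5) > 0 else 0
    data.append((x, label))

k = 10
folds = [data[i::k] for i in range(k)]  # 10 folds of 10 samples each

def fit(train):
    # Toy "model": threshold at the midpoint of the two class means.
    m0 = [x for x, y in train if y == 0]
    m1 = [x for x, y in train if y == 1]
    return (sum(m0) / len(m0) + sum(m1) / len(m1)) / 2

total = [[0, 0], [0, 0]]  # summed confusion matrix: total[truth][pred]
for i in range(k):
    held_out = folds[i]
    train = [s for j in range(k) if j != i for s in folds[j]]
    thresh = fit(train)
    for x, y in held_out:
        pred = 1 if x > thresh else 0
        total[y][pred] += 1

n_predicted = sum(sum(row) for row in total)  # every sample predicted once
accuracy = (total[0][0] + total[1][1]) / n_predicted
```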
17,697
Meaning of y axis in Random Forest partial dependence plot
Answering these two first: Particularly, what do the negative values mean? What does it mean to have a negative influence on accurately predicting the class? If you look at the definition of how the partial plot is computed in the Random Forest package documentation, it says that the plots show the relative logit contribution of the variable to the class probability from the perspective of the model. In other words, negative values (on the y-axis) mean that the positive class is less likely for that value of the independent variable (x-axis) according to the model. Similarly, positive values mean that the positive class is more likely for that value of the independent variable according to the model. Clearly, zero implies no average impact on class probability according to the model. And what is the most important feature from these figures, is it the max value, the shape of the trend etc? There are many different approaches to determining feature importance, and the maximum absolute value is just one simple measure. Typically, people look at the shape of the partial plots to gather understanding about what the model is suggesting about the relationship from variables to class labels. Can you compare the partial plots with partial plots of other variables? The answer to this is less black and white. You can certainly look at the range of the y-axis for each plot; if the partial dependence on one variable is near zero for the whole range of the variable, that tells you that the model does not have any relationship from the variable to the class label. Back to your question, the larger the range, the stronger the influence overall, so in this sense they can be compared. I have no experience with Maxent.
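The underlying computation can be sketched generically (Python, with a made-up model; real randomForest partial plots are on the logit scale, and this sketch only shows the averaging idea): the partial dependence at value v is the model's prediction averaged over the data with the chosen feature forced to v, and a flat curve over the whole range signals no modeled relationship.

```python
def partial_dependence(model, X, j, values):
    # PD at value v: average the model's prediction over the data
    # with feature j forced to v for every row.
    out = []
    for v in values:
        preds = []
        for row in X:
            row2 = list(row)
            row2[j] = v
            preds.append(model(row2))
        out.append(sum(preds) / len(preds))
    return out

# Toy "model": depends on feature 0 only; feature 1 is ignored.
def model(x):
    return 2.0 * x[0] + 1.0

X = [[i / 10.0, (i * 7 % 10) / 10.0] for i in range(20)]
grid = [0.0, 0.5, 1.0]

pd0 = partial_dependence(model, X, 0, grid)  # varies with the feature
pd1 = partial_dependence(model, X, 1, grid)  # flat: no relationship
range0 = max(pd0) - min(pd0)  # large y-axis range: strong influence
range1 = max(pd1) - min(pd1)  # zero range: no influence
```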
17,698
Showing spatial and temporal correlation on maps
I think there are a few options for showing this type of data. The first option would be to conduct an "Empirical Orthogonal Function" (EOF) analysis (also referred to as "Principal Component Analysis" (PCA) in non-climate circles). For your case, this should be conducted on a correlation matrix of your data locations. For example, your data matrix dat would have your spatial locations in the column dimension and the measured parameter in the rows; so, your data matrix will consist of time series for each location. The prcomp() function will allow you to obtain the principal components, or dominant modes of correlation, relating to this field:

res <- prcomp(dat, retx = TRUE, center = TRUE, scale = TRUE)
# center and scale should be TRUE for an analysis of dominant correlation modes
# res$x and res$rotation will contain the PC modes in the temporal
# and spatial dimension, respectively

The second option would be to create maps that show correlation relative to an individual location of interest:

C <- cor(dat)
# C[,n] would be the correlation values between the nth location
# (e.g. dat[,n]) and all other locations

EDIT: additional example. While the following example doesn't use gappy data, you could apply the same analysis to a data field following interpolation with DINEOF (http://menugget.blogspot.de/2012/10/dineof-data-interpolating-empirical.html). The example below uses a subset of monthly anomaly sea level pressure data from the following data set (http://www.esrl.noaa.gov/psd/gcos_wgsp/Gridded/data.hadslp2.html):

library(sinkr) # https://github.com/marchtaylor/sinkr

# load data
data(slp)
grd <- slp$grid
time <- slp$date
field <- slp$field

# make anomaly dataset
slp.anom <- fieldAnomaly(field, time)

# EOF/PCA of SLP anomalies
P <- prcomp(slp.anom, center = TRUE, scale. = TRUE)
expl.var <- P$sdev^2 / sum(P$sdev^2) # explained variance
cum.expl.var <- cumsum(expl.var)     # cumulative explained variance
plot(cum.expl.var)

Map the leading EOF mode:

# make interpolation
require(akima)
require(maps)
eof.num <- 1
F1 <- interp(x=grd$lon, y=grd$lat, z=P$rotation[,eof.num]) # interpolated spatial EOF mode

png(paste0("EOF_mode", eof.num, ".png"), width=7, height=6, units="in", res=400)
op <- par(ps=10) # settings before layout
layout(matrix(c(1,2), nrow=2, ncol=1, byrow=TRUE), heights=c(4,2), widths=7)
#layout.show(2) # run to see layout; comment out to prevent plotting during .pdf
par(cex=1) # layout has the tendency to change par()$cex, so this step is important for control
par(mar=c(4,4,1,1)) # I usually set my margins before each plot
pal <- jetPal
image(F1, col=pal(100))
map("world", add=TRUE, lwd=2)
contour(F1, add=TRUE, col="white")
box()
par(mar=c(4,4,1,1))
plot(time, P$x[,eof.num], t="l", lwd=1, ylab="", xlab="")
plotRegionCol()
abline(h=0, lwd=2, col=8)
abline(h=seq(par()$yaxp[1], par()$yaxp[2], len=par()$yaxp[3]+1), col="white", lty=3)
abline(v=seq.Date(as.Date("1800-01-01"), as.Date("2100-01-01"), by="10 years"), col="white", lty=3)
box()
lines(time, P$x[,eof.num])
mtext(paste0("EOF ", eof.num, " [expl.var = ", round(expl.var[eof.num]*100), "%]"), side=3, line=1)
par(op)
dev.off() # closes device

Create a correlation map:

loc <- c(-90, 0)
target <- which(grd$lon==loc[1] & grd$lat==loc[2])
COR <- cor(slp.anom)
F1 <- interp(x=grd$lon, y=grd$lat, z=COR[,target]) # interpolated spatial correlation field

png(paste0("Correlation_map", "_lon", loc[1], "_lat", loc[2], ".png"), width=7, height=5, units="in", res=400)
op <- par(ps=10) # settings before layout
layout(matrix(c(1,2), nrow=2, ncol=1, byrow=TRUE), heights=c(4,1), widths=7)
par(cex=1)
par(mar=c(4,4,1,1))
pal <- colorRampPalette(c("blue", "cyan", "yellow", "red", "yellow", "cyan", "blue"))
ncolors <- 100
breaks <- seq(-1,1,,ncolors+1)
image(F1, col=pal(ncolors), breaks=breaks)
map("world", add=TRUE, lwd=2)
contour(F1, add=TRUE, col="white")
box()
par(mar=c(4,4,0,1))
imageScale(F1, col=pal(ncolors), breaks=breaks, axis.pos = 1)
mtext("Correlation [R]", side=1, line=2.5)
box()
par(op)
dev.off() # closes device
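The second option (a correlation map relative to one location of interest, as in C[,n]) is easy to sketch in any language. Here is a pure-Python illustration with made-up station series, where two stations share a regional signal and a third is independent:

```python
import math
import random

random.seed(3)

# Made-up monthly series for three "stations": stations 0 and 1 share a
# regional signal; station 2 is independent noise.
n_months = 240
signal = [random.gauss(0, 1) for _ in range(n_months)]
station0 = [s + random.gauss(0, 0.3) for s in signal]
station1 = [s + random.gauss(0, 0.3) for s in signal]
station2 = [random.gauss(0, 1) for _ in range(n_months)]
cols = [station0, station1, station2]

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(va * vb)

target = 0  # the location of interest, as in C[, n]
cor_map = [pearson(cols[target], c) for c in cols]
# cor_map[0] is 1 by construction; cor_map[1] is high (shared regional
# signal); cor_map[2] is near 0 (no relationship).
```

Plotting cor_map at the station coordinates gives exactly the kind of correlation map produced by the R example above.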
17,699
Showing spatial and temporal correlation on maps
I don't see clearly behind the lines, but it seems to me that there are too many data points. Since you want to show the regional homogeneity and not the exact stations, I'd suggest first grouping them spatially. For example, overlay a "fishnet" and compute the average measured value in every cell (at every time moment). If you place these average values at the cell centers, you rasterize the data this way (or you can also compute the mean latitude and longitude in every cell if you don't want overlapping lines). Or average inside administrative units, whatever. Then for these new averaged "stations" you can calculate correlations and plot a map with a smaller number of lines. This can also remove those random single high-correlation lines going through the whole area.
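The "fishnet" averaging can be sketched in a few lines (Python for illustration; the coordinates and values are invented): bin each station into a grid cell, average the values per cell, and place the averaged "station" at the cell center.

```python
from collections import defaultdict

# Hypothetical stations: (lon, lat, value). Bin into 1-degree cells and
# average the value per cell, as the "fishnet" suggestion describes.
stations = [
    (10.2, 45.1, 3.0), (10.7, 45.9, 5.0),  # same cell -> averaged
    (11.3, 45.2, 2.0),
    (12.8, 46.5, 7.0),
]

cell_size = 1.0
cells = defaultdict(list)
for lon, lat, val in stations:
    key = (int(lon // cell_size), int(lat // cell_size))
    cells[key].append(val)

# One averaged "station" per cell, placed at the cell center.
binned = {
    ((i + 0.5) * cell_size, (j + 0.5) * cell_size): sum(v) / len(v)
    for (i, j), v in cells.items()
}
```

Correlations (and the correlation lines on the map) are then computed between these fewer, averaged series instead of the raw stations.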
17,700
When data has a gaussian distribution, how many samples will characterise it?
The amount of data needed to estimate the parameters of a multivariate Normal distribution to within specified accuracy at a given confidence does not vary with the dimension, all other things being the same. Therefore you may apply any rule of thumb for two dimensions to higher-dimensional problems without any change at all. Why should it? There are only three kinds of parameters: means, variances, and covariances. The error of the estimate of a mean depends only on the variance ($\sigma^2$) and the amount of data ($n$). Thus, when $(X_1, X_2, \ldots, X_d)$ has a multivariate Normal distribution and the $X_i$ have variances $\sigma_i^2$, then the estimates of $\mathbb{E}[X_i]$ depend only on the $\sigma_i$ and $n$. Whence, to achieve adequate accuracy in estimating all the $\mathbb{E}[X_i]$, we only need to consider the amount of data needed for the $X_i$ having the largest of the $\sigma_i$. Therefore, when we contemplate a succession of estimation problems for increasing dimensions $d$, all we need to consider is how much the largest $\sigma_i$ will increase. When these parameters are bounded above, we conclude that the amount of data needed does not depend on dimension. Similar considerations apply to estimating the variances $\sigma_i^2$ and covariances $\sigma_{ij}$: if a certain amount of data suffices for estimating one covariance (or correlation coefficient) to a desired accuracy, then--provided the underlying normal distribution has similar parameter values--the same amount of data will suffice for estimating any covariance or correlation coefficient. To illustrate, and provide empirical support for this argument, let's study some simulations.
The following creates parameters for a multinormal distribution of specified dimensions, draws many independent, identically distributed sets of vectors from that distribution, estimates the parameters from each such sample, and summarizes the results of those parameter estimates in terms of (1) their averages--to demonstrate that they are unbiased (and that the code is working correctly)--and (2) their standard deviations, which quantify the accuracy of the estimates. (Do not confuse these standard deviations, which quantify the amount of variation among estimates obtained over multiple iterations of the simulation, with the standard deviations used to define the underlying multinormal distribution!) My claim is that these standard deviations do not materially change when the dimension $d$ changes, provided that as $d$ changes, we do not introduce larger variances into the underlying multinormal distribution itself. The sizes of the variances of the underlying distribution are controlled in this simulation by making the largest eigenvalue of the covariance matrix equal to $1$. This keeps the probability density "cloud" within bounds as the dimension increases, no matter what the shape of this cloud might be. Simulations of other models of behavior of the system as the dimension increases can be created simply by changing how the eigenvalues are generated; one example (using a Gamma distribution) is shown commented out in the R code below. What we are looking for is to verify that the standard deviations of the parameter estimates do not appreciably change when the dimension $d$ is changed. I therefore show the results for two extremes, $d=2$ and $d=60$, using the same amount of data ($30$) in both cases. It is noteworthy that the number of parameters estimated when $d=60$, equal to $1890$, far exceeds the number of vectors ($30$) and even exceeds the individual numbers ($30\times 60=1800$) in the entire dataset. Let's begin with two dimensions, $d=2$.
There are five parameters: two variances (with standard deviations of $0.097$ and $0.182$ in this simulation), a covariance (SD = $0.126$), and two means (SD = $0.11$ and $0.15$). With different simulations (obtainable by changing the starting value of the random seed) these will vary a bit, but they will consistently be of comparable size when the sample size is $n=30$. For instance, in the next simulation the SDs are $0.014$, $0.263$, $0.043$, $0.04$, and $0.18$, respectively: they all changed but are of comparable orders of magnitude. (These statements can be supported theoretically, but the point here is to provide a purely empirical demonstration.)

Now we move to $d=60$, keeping the sample size at $n=30$. Specifically, this means each sample consists of $30$ vectors, each having $60$ components. Rather than list all $1890$ standard deviations, let's just look at pictures of them using histograms to depict their ranges.

The scatterplots in the top row compare the actual parameters sigma ($\sigma$) and mu ($\mu$) to the average estimates made during the $10^4$ iterations in this simulation. The gray reference lines mark the locus of perfect equality: clearly the estimates are working as intended and are unbiased.

The histograms appear in the bottom row, separately for all entries in the covariance matrix (left) and for the means (right). The SDs of the individual variances tend to lie between $0.08$ and $0.12$, while the SDs of the covariances between separate components tend to lie between $0.04$ and $0.08$: exactly in the range achieved when $d=2$. Similarly, the SDs of the mean estimates tend to lie between $0.08$ and $0.13$, which is comparable to what was seen when $d=2$. Certainly there's no indication that the SDs have increased as $d$ went up from $2$ to $60$.

The code follows.

#
# Create iid multivariate data and do it `n.iter` times.
#
sim <- function(n.data, mu, sigma, n.iter=1) {
  #
  # Returns arrays of parameter estimates (distinguished by the last index).
  # (Note: `n.dim` is read from the global environment.)
  #
  library(MASS) # mvrnorm()
  x <- mvrnorm(n.iter * n.data, mu, sigma)
  s <- array(sapply(1:n.iter, function(i) cov(x[(n.data*(i-1)+1):(n.data*i), ])),
             dim=c(n.dim, n.dim, n.iter))
  m <- array(sapply(1:n.iter, function(i) colMeans(x[(n.data*(i-1)+1):(n.data*i), ])),
             dim=c(n.dim, n.iter))
  return(list(m=m, s=s))
}
#
# Control the study.
#
set.seed(17)
n.dim <- 60
n.data <- 30   # Amount of data per iteration
n.iter <- 10^4 # Number of iterations
#n.parms <- choose(n.dim+2, 2) - 1
#
# Create a random mean vector.
#
mu <- rnorm(n.dim)
#
# Create a random covariance matrix.
#
#eigenvalues <- rgamma(n.dim, 1)
eigenvalues <- exp(-seq(from=0, to=3, length.out=n.dim)) # For comparability
u <- svd(matrix(rnorm(n.dim^2), n.dim))$u
sigma <- u %*% diag(eigenvalues) %*% t(u)
#
# Perform the simulation.
# (Timing is about 5 seconds for n.dim=60, n.data=30, and n.iter=10000.)
#
system.time(sim.data <- sim(n.data, mu, sigma, n.iter))
#
# Optional: plot the simulation results.
#
if (n.dim <= 6) {
  par(mfcol=c(n.dim, n.dim+1))
  tmp <- apply(sim.data$s, 1:2, hist)
  tmp <- apply(sim.data$m, 1, hist)
}
#
# Compare the mean simulation results to the parameters.
#
par(mfrow=c(2,2))
plot(sigma, apply(sim.data$s, 1:2, mean), main="Average covariances")
abline(c(0,1), col="Gray")
plot(mu, apply(sim.data$m, 1, mean), main="Average means")
abline(c(0,1), col="Gray")
#
# Quantify the variability.
#
i <- lower.tri(matrix(1, n.dim, n.dim), diag=TRUE)
hist(sd.cov <- apply(sim.data$s, 1:2, sd)[i], main="SD covariances")
hist(sd.mean <- apply(sim.data$m, 1, sd), main="SD means")
#
# Display the simulation standard deviations for inspection.
#
sd.cov
sd.mean
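As a quick check that the eigenvalue construction behaves as claimed, here is the same recipe in isolation (my own sketch, not part of the simulation; the toy dimension `d` and the decay rate are arbitrary choices). Because the covariance matrix is built as $U \Lambda U^\top$ with an orthogonal $U$, its eigenvalues are exactly the entries of $\Lambda$, whose maximum is $e^0 = 1$:

```r
# Build a random covariance matrix whose largest eigenvalue equals 1
# (toy dimensions; mirrors the construction used in the simulation).
set.seed(17)
d <- 5
ev <- exp(-seq(from=0, to=3, length.out=d))  # decaying spectrum; max is 1
u <- svd(matrix(rnorm(d^2), d))$u            # random orthogonal matrix
sigma <- u %*% diag(ev) %*% t(u)
max(eigen(sigma)$values)                     # 1, up to rounding error
```

This bounds the largest component variance no matter how large $d$ grows, which is what keeps the probability "cloud" from spreading with dimension.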
When data has a gaussian distribution, how many samples will characterise it?