Is there a difference between an autocorrelated time-series and serially autocorrelated errors?
Just to add to Dimitriy's very good answer: error autocorrelation poses problems for the calculation of the coefficients' standard errors, and thus for the significance levels (p-values), making IV selection less straightforward. $R^2$ and the F statistic are also affected. Of all the assumptions in linear regression (homoscedasticity, independence of the residuals, linearity of the relationship IVs --> DV, normality of residuals), linearity and independence of the residuals are the ones whose violation most seriously distorts the results.
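To see the effect concretely, here is a stdlib-Python sketch (my own illustration, with arbitrary simulation settings, not from the answer above): with strongly autocorrelated AR(1) errors, the naive OLS standard error of a slope can badly understate the true sampling variability of the estimate, which is exactly what corrupts the p-values.

```python
import math
import random

def ols_slope(x, y):
    # OLS slope and its naive standard error (which assumes i.i.d. errors)
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
    a = ybar - b * xbar
    resid = [yi - (a + b * xi) for xi, yi in zip(x, y)]
    s2 = sum(r * r for r in resid) / (n - 2)
    return b, math.sqrt(s2 / sxx)

random.seed(1)
n, rho, trials = 100, 0.9, 500
x = list(range(n))
slopes, naive_ses = [], []
for _ in range(trials):
    e, errs = 0.0, []
    for _ in range(n):
        e = rho * e + random.gauss(0, 1)   # AR(1) errors, rho = 0.9
        errs.append(e)
    y = [2.0 + 0.5 * xi + ei for xi, ei in zip(x, errs)]
    b, se = ols_slope(x, y)
    slopes.append(b)
    naive_ses.append(se)

mean_b = sum(slopes) / trials
emp_sd = math.sqrt(sum((b - mean_b) ** 2 for b in slopes) / (trials - 1))
avg_se = sum(naive_ses) / trials
# emp_sd (true spread of the slope estimates) is several times avg_se
# (what the i.i.d. formula reports), so nominal p-values are too small.
```

The slope estimate itself stays unbiased; only its reported precision is wrong, which is the point made above about standard errors rather than coefficients.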
When will a less true model predict better than a truer model?
I believe that this is one of the most counter-intuitive aspects of statistics; it's just really difficult to wrap your head around. The key notion here is the idea of the bias-variance tradeoff. It has been discussed in several places on CV, and you may want to check out some of the other answers on the topic; they are quite good and well worth your time. I will try to give a quick sense of the idea. Let me first define some terms. To start with, what Shmueli means by "true" model is the actual data generating process; the closer your estimated model is to the real data generating process, the truer it is. For instance, if $\beta_1=.5$, and one model fit yields $\hat{\beta}_1=.6$, that's truer than another fit that yields $\hat{\beta}_1=.7$. On the other hand, predicting better means getting your $\hat{y}$'s as close as possible to the actual $y$'s, especially for out-of-sample data. Notice the difference in goals here (because that's crucial to understanding the issue): getting $\hat\beta$'s as close as possible to the true $\beta$'s vs. getting $\hat{y}$'s as close as possible to the actual $y$'s. So Shmueli's point is that sometimes your $\hat{y}$'s can be closer to the actual $y$'s when your $\hat\beta$'s are estimated by a process that, on average, yields values a little further from the true $\beta$'s. Now, how is that possible? The key is that there is variance associated with parameters estimated from sample data. For a given sample, sometimes the maximum likelihood estimate happens to be further from the true value and sometimes closer. It is quite possible to have a situation where the variance of the sampling distribution of a parameter estimate is so large that the $\hat\beta$'s routinely bounce so far around their true value that they are not worth much.
The thing to remember here is that classical statistics is built on what are called 'best linear unbiased estimators' (BLUE), that is, the estimators that have the lowest variance of all the unbiased estimators. However, there can be other ways of attempting to get an estimate that are not unbiased. Typically, these have been developed within machine learning (a subfield of computer science). It is possible in some cases to have an estimate that doesn't tend to bounce as far from the true value, even though the sampling distribution of that estimate is not centered on the true value (i.e., it is biased). Given all of this, what matters for the accuracy of your predictions is how the inaccuracy due to the induced bias trades off against the inaccuracy induced by the high variance of the BLUE parameter estimate. Specifically, if the inaccuracy due to the higher variance is greater than the inaccuracy due to the bias, the less true model will give the better predictions.
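The tradeoff can be seen in a toy simulation (a Python sketch of my own construction, not from Shmueli): deliberately shrinking an unbiased estimate toward zero introduces bias but cuts variance, and with a small, noisy sample the biased estimator wins on mean squared error.

```python
import random

# Estimate a mean beta = 0.5 from n = 10 noisy observations (sd = 2).
# Estimator 1: the sample mean (unbiased, the "truer" fit).
# Estimator 2: the sample mean shrunk halfway toward zero (biased).
random.seed(0)
true_beta, sigma, n, trials = 0.5, 2.0, 10, 20000
shrink = 0.5

se_unbiased = se_shrunk = 0.0
for _ in range(trials):
    sample_mean = sum(random.gauss(true_beta, sigma) for _ in range(n)) / n
    se_unbiased += (sample_mean - true_beta) ** 2
    se_shrunk += (shrink * sample_mean - true_beta) ** 2

mse_unbiased = se_unbiased / trials  # ~ sigma^2 / n = 0.40
mse_shrunk = se_shrunk / trials      # ~ bias^2 + shrink^2 * sigma^2 / n ~ 0.16
```

Here the shrunk estimator is on average further from the true $\beta$, yet its squared error is smaller because its variance is only a quarter of the unbiased estimator's; flip the settings (large $n$ or small $\sigma$) and the unbiased estimator wins again.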
Origin of the Naïve Bayes classifier?
A naive Bayes classifier is a simple probabilistic classifier based on applying Bayes' theorem with strong (naive) independence assumptions. Bayes' theorem was named after the Reverend Thomas Bayes (1702-61), who studied how to compute a distribution for the probability parameter of a binomial distribution. After Bayes' death, his friend Richard Price edited and presented this work in 1763 as An Essay towards solving a Problem in the Doctrine of Chances. So it is safe to say that Bayes classifiers have been around since the second half of the 18th century, especially since Stephen Stigler suggested (Stephen M. Stigler, "Who Discovered Bayes' Theorem?", The American Statistician 37(4):290-296, 1983) that Bayes' theorem was discovered by Nicholas Saunderson some time before Bayes. On the other hand, Edwards disputed that interpretation (A. W. F. Edwards, "Is the Reference in Hartley (1749) to Bayesian Inference?", The American Statistician 40(2):109-110, 1986), which takes us back to the safe assumption of "second half of the 18th century" again. A naive Bayes classifier simply applies Bayes' theorem; what makes it "naive" is the strong independence assumptions it adds. But practically, it's the same theorem.
Origin of the Naïve Bayes classifier?
I have seen the following paper cited before for Naive Bayes: Hand, D. J., & Yu, K. (2001). Idiot's Bayes—not so stupid after all?. International statistical review, 69(3), 385-398. It is a bit of a review and discussion of the topic.
Testing for stability in a time-series
This short remark is far from a complete answer, just some suggestions: if you have two periods of time where the behaviour is different, by different I mean differences in model parameters (not relevant in this particular situation), in the mean or variance, or in any other expected characteristic of the time-series object ($x_t$ in your case), you can try any method that estimates the time (interval) of structural (or epidemic) change. In R there is the strucchange package for structural changes in linear regression models. Though it is primarily used for testing and monitoring changes in linear regression parameters, some of its statistics can be used for general structural changes in time series.
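As a rough illustration of the idea (a stdlib-Python sketch, not the strucchange package itself): the simplest structural-change estimator for a single mean shift picks the split point that minimizes the combined within-segment sum of squares.

```python
import random

def mean_shift_breakpoint(x, min_seg=5):
    """Estimate a single mean-shift changepoint by minimizing the
    combined within-segment sum of squared deviations."""
    def sse(seg):
        m = sum(seg) / len(seg)
        return sum((v - m) ** 2 for v in seg)
    best_t, best = None, float("inf")
    for t in range(min_seg, len(x) - min_seg):
        total = sse(x[:t]) + sse(x[t:])
        if total < best:
            best, best_t = total, t
    return best_t

random.seed(3)
# toy series: mean jumps from 0 to 3 at t = 100
x = [random.gauss(0, 1) for _ in range(100)] + \
    [random.gauss(3, 1) for _ in range(100)]
bp = mean_shift_breakpoint(x)  # lands very close to 100
```

strucchange does much more (F statistics, multiple breakpoints, confidence intervals), but this least-squares split is the core idea its `breakpoints` machinery generalizes.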
Testing for stability in a time-series
As I read your question ("and the fluctuations around the stable point are much smaller than the fluctuations during the transient period"), what I get out of it is a request to detect whether the variance of the errors has changed and, if so, when! If that is your objective then you might consider reviewing the work of R. Tsay, "Outliers, Level Shifts and Variance Changes in Time Series", Journal of Forecasting 7, 1-20 (1988). I have done considerable work in this area and find it very productive in yielding good analysis. Other approaches (OLS/linear regression analysis, for example), which assume independent observations, no pulse outliers, no level shifts or local time trends, and time-invariant parameters, are insufficient in my opinion.
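A minimal sketch of the variance-change idea (plain Python of my own, not Tsay's actual procedure, which also handles outliers and level shifts): estimate a single variance changepoint by maximizing a two-segment Gaussian likelihood, letting each segment have its own variance.

```python
import math
import random

def variance_change_point(x, min_seg=20):
    """Estimate a single variance changepoint by maximizing the
    two-segment Gaussian log-likelihood (each segment has its own
    mean and variance)."""
    def seg_ll(seg):
        n = len(seg)
        m = sum(seg) / n
        var = sum((v - m) ** 2 for v in seg) / n
        return -0.5 * n * (math.log(2 * math.pi * var) + 1)
    best_t, best = None, -float("inf")
    for t in range(min_seg, len(x) - min_seg):
        ll = seg_ll(x[:t]) + seg_ll(x[t:])
        if ll > best:
            best, best_t = ll, t
    return best_t

random.seed(4)
# toy series: the error sd jumps from 1 to 4 at t = 150
x = [random.gauss(0, 1) for _ in range(150)] + \
    [random.gauss(0, 4) for _ in range(150)]
cp = variance_change_point(x)  # lands near 150
```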
Testing for stability in a time-series
I was thinking more about the question and thought I would give a slight enhancement of the naive approach as an answer, in hopes that people know further ideas in this direction. It also allows us to eliminate the need to know the size of the fluctuations. The easiest way to implement it is with two parameters $(T,\alpha)$. Let $y_t = x_{t + 1} - x_{t}$ be the change in the time series between timestep $t$ and $t + 1$. When the series is stable around $x^*$, $y$ will fluctuate around zero with some standard error. Here we will assume that this error is normal. Take the last $T$ $y_t$'s and fit a Gaussian at confidence level $\alpha$ using a function like Matlab's normfit. The fit will give us a mean $\mu$ with $\alpha$-confidence error on the mean $E_\mu$, and a standard deviation $\sigma$ with corresponding error $E_\sigma$. If $0 \in (\mu - E_\mu, \mu + E_\mu)$, then you can accept that the series has stabilized. If you want to be extra sure, then you can also renormalize the $y_t$'s by the $\sigma$ you found (so that you now have standard deviation $1$) and test with the Kolmogorov-Smirnov test at the $\alpha$ confidence level. The advantage of this method is that, unlike the naive approach, you no longer need to know anything about the magnitude of the thermal fluctuations around the mean. The limitation is that you still have an arbitrary $T$ parameter, and we had to assume a normal distribution on the noise (which is not unreasonable). I am not sure if this can be modified by some weighted mean with discounting. If a different distribution is expected to model the noise, then normfit and the Kolmogorov-Smirnov test should be replaced by their equivalents for that distribution.
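A stdlib-Python sketch of the first step (the confidence interval on the mean of the differences; the normal critical value 1.96 stands in for normfit's exact t-based interval, and the Kolmogorov-Smirnov step is omitted):

```python
import math
import random

def diffs_mean_test(x, T=200, z=1.96):
    """Accept stability if the mean of the last T one-step differences
    y_t = x_{t+1} - x_t is within z standard errors of zero
    (an approximate 95% interval for z = 1.96)."""
    tail = x[-(T + 1):]
    y = [tail[i + 1] - tail[i] for i in range(T)]
    m = sum(y) / T
    var = sum((v - m) ** 2 for v in y) / (T - 1)
    sem = math.sqrt(var / T)
    return abs(m) <= z * sem

random.seed(7)
# stable: fluctuating around x* = 5; drifting: still in the transient
stable = [5 + random.gauss(0, 1) for _ in range(400)]
drifting = [0.5 * t + random.gauss(0, 1) for t in range(400)]
# diffs_mean_test(stable) -> True, diffs_mean_test(drifting) -> False
```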
Testing for stability in a time-series
You might consider testing backward (with a rolling window) for co-integration between x and the long term mean. When x is flopping around the mean, hopefully the windowed Augmented Dickey Fuller test, or whatever co-integration test you choose, will tell you that the two series are co-integrated. Once you get into the transition period, where the two series stray away from each other, hopefully your test will tell you that the windowed series are not co-integrated. The problem with this scheme is that it is harder to detect co-integration in a smaller window. And, a window that is too big, if it includes only a small segment of the transition period, will tell you that the windowed series is co-integrated when it shouldn't. And, as you might guess, there's no way to know ahead of time what the "right" window size might be. All I can say is that you'll have to play around with it to see if you get reasonable results.
Testing for stability in a time-series
As the simulation runs, take the last $2N$ points and divide them into first and second halves. Compute the series of changes (the values $m_{t+1} - m_{t}$) for the metric of interest within each half. Test whether these two sets of deltas come from the same distribution. The easiest way to do this is to compute the empirical cdf of each, labeling the recent one as "observed" and the prior one as "expected", and then conduct Pearson's chi-squared test on the counts of your metric in each decile.
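A sketch of the procedure in plain Python (my own illustration: the decile edges come from the "expected" half, and the 5% chi-squared critical value for 9 degrees of freedom, about 16.92, is hardcoded):

```python
import random

def chi2_decile_stat(expected, observed):
    """Pearson chi-squared statistic: bin `observed` using the decile
    edges of `expected` and compare bin counts with the uniform
    expectation of len(observed)/10 per decile."""
    s = sorted(expected)
    n = len(s)
    edges = [s[int(n * k / 10)] for k in range(1, 10)]  # 9 interior edges
    counts = [0] * 10
    for v in observed:
        b = sum(v > e for e in edges)  # which decile bin v falls in
        counts[b] += 1
    exp = len(observed) / 10
    return sum((c - exp) ** 2 / exp for c in counts)

random.seed(11)
prior = [random.gauss(0, 1) for _ in range(1000)]           # first half
recent_same = [random.gauss(0, 1) for _ in range(1000)]     # settled
recent_changed = [random.gauss(0, 3) for _ in range(1000)]  # still moving

CRIT_9DF_05 = 16.92  # chi-squared critical value, df = 9, alpha = 0.05
stat_same = chi2_decile_stat(prior, recent_same)        # ~ 9, accepts
stat_changed = chi2_decile_stat(prior, recent_changed)  # huge, rejects
```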
Testing for stability in a time-series
Aside from the obvious Kalman filter solution, you can use wavelet decompositions and get a time- and frequency-localised power spectrum. This satisfies your desire to avoid assumptions, but unfortunately it does not give you a formal test of when the system settles. For a practical application, though, it's fine; just look for the time when the energy in the high frequencies dies and the father wavelet coefficients stabilise.
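The crudest version of "watch the high-frequency energy die" is a one-level Haar detail decomposition; here is a stdlib-Python sketch of my own on a made-up settling signal:

```python
import math
import random

def haar_detail_energy(x, window=64):
    """One-level Haar detail coefficients d_k = (x_{2k} - x_{2k+1})/sqrt(2),
    summed over non-overlapping windows of the original series: a crude
    local high-frequency energy."""
    d = [(x[2 * k] - x[2 * k + 1]) / math.sqrt(2) for k in range(len(x) // 2)]
    step = window // 2  # each window of x contributes window//2 details
    return [sum(v * v for v in d[i:i + step])
            for i in range(0, len(d) - step + 1, step)]

random.seed(5)
# large oscillation while settling, then small noise around the fixed point
transient = [10 * (-1) ** t + random.gauss(0, 0.3) for t in range(256)]
settled = [random.gauss(0, 0.3) for _ in range(256)]
energy = haar_detail_energy(transient + settled)
# the per-window detail energy collapses once the system settles
```

A full wavelet analysis (e.g. with several decomposition levels and a proper mother wavelet) localises the drop in both time and scale, but the collapse in detail energy is the signal either way.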
Clustered standard errors and multi-level models
When you cluster on some observed attribute, you are making a statistical correction to the standard errors to account for some presumed similarity in the distribution of observations within clusters. When you estimate a multi-level model with random effects, you are explicitly modeling that variation, not treating it simply as a nuisance, thus clustering is not needed.
Mixed effects in Random forest (in R)
The currently most popular implementation of Random Forests (RF) (i.e. the randomForest package) is available only for univariate (continuous or discrete) responses. Mixed models, on the other hand, are inherently multivariate models, that is, models that deal with vector-valued responses. Fortunately, extensions of RF for multivariate responses, in particular for handling longitudinal data, do exist. LongitudiRF is one R package implementing Random Forests for longitudinal data that I am aware of. A lot more information can be found in a recent review paper on Random Forests for longitudinal data. Related posts: How can I include random effects (or repeated measures) into a randomForest; How to deal with hierarchical / nested data in machine learning; Random forest for binary panel data.
Generalized CLT for any operation
There can only be such a theorem if $g$ is well-behaved in some sense: in particular, it should depend only on the set of values provided, not on their order. Here is a big class of such functions, to which the central limit theorem can be applied directly: functions $g$ for which there is an invertible function $f$ where $$g(X_1,X_2,...,X_n) = f^{-1}(f(X_1)+\dotsm +f(X_n)).$$ Your example of the lognormal distribution is such a case, where $f$ is $\log$. Your example of $\max$ is the limit as $k \to \infty$ of $f(x)=x^k$, and similarly for $\min$ with $f(x)=x^{-k}$.
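A quick numerical check of the $f=\log$ case (a Python sketch): $g$ is then the product of the $X_i$, and the ordinary CLT applied to $\sum \log X_i$ says the log of the product is approximately normal with mean $n\mu$ and variance $n\sigma^2$, where $\mu$ and $\sigma^2$ are the mean and variance of $\log X$.

```python
import math
import random

# X_i ~ Uniform(0, 1], for which E[log X] = -1 and Var[log X] = 1,
# so log(X_1 * ... * X_n) should be approximately N(-n, n).
random.seed(2)
n, trials = 50, 4000
log_products = []
for _ in range(trials):
    # sum of logs = log of the product g(X_1..X_n) = f^{-1}(sum f(X_i))
    s = sum(math.log(1.0 - random.random()) for _ in range(n))
    log_products.append(s)

m = sum(log_products) / trials                              # ~ -n = -50
v = sum((s - m) ** 2 for s in log_products) / (trials - 1)  # ~ n = 50
```

(`1.0 - random.random()` keeps the draw in $(0, 1]$ so the log is always defined.) The same recipe works for any invertible $f$ with $f(X)$ having finite variance: apply the CLT to the sum of $f(X_i)$ and map back through $f^{-1}$.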
Why doesn't R use Inverse Transform Sampling to sample from the Exponential Distribution?
I have written rather a lot of code to test this on two machines. The first is a Windows machine with an Intel(R) Core(TM) i7-9700K CPU @ 3.60GHz processor, using a recent MinGW compiler via the CodeBlocks IDE, with optimizations set at O3 and expensive_optimizations enabled, but no chip-specific optimizations. I have run the same code on an Intel(R) Xeon CPU E5-2650L v4 @ 1.70GHz processor, Ubuntu OS, and GCC 7.5 compiler, with the same compiler options.

I am using the pcg32 random number generator to generate the required uniform variates inside the code helpfully provided by @Alex. pcg32 is extremely fast - my implementation, copied from who knows where, takes only 18% longer than the C library rand() function while returning an unsigned integer between 0 and $4294967295 = 2^{32}-1$, whereas rand() returns a signed integer between 0 and 32767 (much poorer granularity) - and it has excellent statistical properties. See https://www.pcg-random.org/ for more.

Fast Windows machine: generating 10 million variates using the pcg32 version of the code that R uses took 568,095 microseconds, including the overhead induced by the for loop. Generating 10 million variates using the inverse probability transform took 527,346 microseconds, including the loop overhead - roughly 93% of the time that the A-D algorithm uses. Roughly 40% of the time for the inverse probability transform algorithm appears to be loop overhead and the uniform RNG.

Slower Linux machine: the A-D algorithm took 666,358 microseconds, the inverse probability transform only 555,192 microseconds - roughly 83% of the time that the A-D algorithm uses.

These results certainly validate the OP's suspicion that things may have changed since the 1970s. Regardless of the algorithm choice, being able to generate 15-20 million exponential random numbers in one second on one thread is certainly a nice capability to have!

One interesting finding is that the runtimes don't scale with the CPU speed; they are a little greater on the Linux box than on the Windows box, but nowhere near the 2x+ difference in clock speed. Of course, the compilers are different, and GHz is by no means the sole influence on runtime. I'd be happy to post the code, but there's about 130 lines of it, including some comments and blank lines. Thoughts?
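For reference, the inverse-transform sampler being timed above is only a couple of lines. Here is a minimal Python sketch of the algorithm (the benchmark itself was C with pcg32; the function and variable names here are my own):

```python
import math
import random

def exp_inverse_transform(rate=1.0, rng=random.random):
    """Sample from Exp(rate) by inverting the CDF:
    F(x) = 1 - exp(-rate * x), so F^{-1}(u) = -log(1 - u) / rate.
    Using -log(u) is equivalent, since U and 1 - U are both Uniform(0,1)."""
    u = rng()
    while u <= 0.0:            # guard against log(0)
        u = rng()
    return -math.log(u) / rate

random.seed(42)
samples = [exp_inverse_transform(rate=2.0) for _ in range(200_000)]
mean = sum(samples) / len(samples)   # should be close to 1/rate = 0.5
```

The entire sampler is one uniform draw plus one logarithm, which is why its runtime is dominated by the RNG and loop overhead, consistent with the timings above.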
28,515
Why doesn't R use Inverse Transform Sampling to sample from the Exponential Distribution?
Just for review's sake, here is the source code for this function in R (written in the C language):

```c
#include "nmath.h"

double exp_rand(void)
{
    /* q[k-1] = sum(log(2)^k / k!)  for k = 1,..,n.
       The highest n (here 16) is determined by q[n-1] = 1.0
       within standard precision. */
    const static double q[] = {
        0.6931471805599453, 0.9333736875190459, 0.9888777961838675,
        0.9984959252914960, 0.9998292811061389, 0.9999833164100727,
        0.9999985691438767, 0.9999998906925558, 0.9999999924734159,
        0.9999999995283275, 0.9999999999728814, 0.9999999999985598,
        0.9999999999999289, 0.9999999999999968, 0.9999999999999999,
        1.0000000000000000
    };

    double a = 0.0;
    double u = unif_rand();
    /* precaution if u = 0 is ever returned */
    while (u <= 0.0 || u >= 1.0) u = unif_rand();
    for (;;) {
        u += u;
        if (u > 1.0) break;
        a += q[0];
    }
    u -= 1.0;

    if (u <= q[0]) return a + u;

    int i = 0;
    double ustar = unif_rand(), umin = ustar;
    do {
        ustar = unif_rand();
        if (umin > ustar) umin = ustar;
        i++;
    } while (u > q[i]);
    return a + umin * q[0];
}
```
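For readers who would rather not parse C, here is a rough Python transcription of the same Ahrens-Dieter routine (my own sketch, with `unif_rand()` replaced by `random.random`; it is not the R implementation itself):

```python
import math
import random

# Precompute Q[k-1] = sum_{j=1..k} log(2)^j / j!, the same table R hard-codes.
Q = []
acc, term = 0.0, 1.0
for j in range(1, 17):
    term *= math.log(2.0) / j
    acc += term
    Q.append(acc)

def exp_rand(rng=random.random):
    """Ahrens-Dieter (1972) Exp(1) sampler, following R's exp_rand()."""
    a = 0.0
    u = rng()
    while u <= 0.0 or u >= 1.0:      # precaution, as in the C source
        u = rng()
    while True:                      # strip leading binary digits of u
        u += u
        if u > 1.0:
            break
        a += Q[0]                    # Q[0] = log(2)
    u -= 1.0
    if u <= Q[0]:
        return a + u
    # rejection step: add log(2) times the minimum of several uniforms
    i = 0
    umin = rng()
    while u > Q[i]:
        umin = min(umin, rng())
        i += 1
    return a + umin * Q[0]

random.seed(1)
mean = sum(exp_rand() for _ in range(200_000)) / 200_000
```

The sample mean should come out close to 1, the mean of an Exp(1) distribution.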
28,516
Why are Ratios "Dangerous" in Statistical Modelling? [closed]
The "dangerous" part of the ratio is the inverted denominator

If you have a ratio term involving two explanatory variables in a regression model, this can be written as the interaction term:

$$\frac{x_{1,i}}{x_{2,i}} = x_{1,i} \times \frac{1}{x_{2,i}}.$$

Now, there is nothing inherently problematic or dangerous about having an interaction term involving the explanatory variable $x_{1,i}$, and indeed, we have interaction terms like this in many regression models. However, it is arguably quite "dangerous" to have a model term that inverts the explanatory variable $x_{2,i}$ --- if this value is small for some data points then this explanatory term will "explode" at those data points, which will generally cause them to have large positive or negative values, yielding high-leverage points in the regression (i.e., they will affect the OLS fit a lot).

Be careful painting this situation with too broad a brush, because terms of this kind are not always dangerous. Indeed, if the explanatory variable $x_{2,i}$ was already "explosive" (say, because it was already the inverse of a stable random variable with a mean near zero) then inversion may actually make it more stable instead of more explosive. As a general rule, if we invert a random variable with relatively low kurtosis and a mean near zero, we will tend to get a random variable with high kurtosis (i.e., high probability of extreme values), and vice versa.

Here we have concentrated on the term involving an inverted explanatory variable. Of course, it is possible that the interaction with $x_{1,i}$ could aggravate the explosive nature of this term, particularly if large values of $x_{1,i}$ tend to go with small values of $x_{2,i}$. But as you can see, it is really the inversion that is the "dangerous" part. Whether or not the ratio term is "dangerous" largely comes down to whether or not the inverted term $1/x_{2,i}$ is "dangerous" in its own right: if $x_{2,i}$ has some small values then this term will be quite explosive and yield high-leverage data points.
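A quick numerical sketch of that point (Python, with made-up data rather than anything from the question): a bounded, well-behaved variable becomes explosive once inverted, because draws near zero map to huge values.

```python
import random
import statistics

random.seed(0)

# x2: a tame explanatory variable, Uniform(0, 1) -- bounded, low kurtosis
x2 = [random.random() for _ in range(100_000)]

# its inverse 1/x2: the "dangerous" term in the ratio x1/x2
inv = [1.0 / v for v in x2]

print(max(x2))                 # bounded by 1
print(statistics.median(inv))  # ~2: the typical inverted value is unremarkable
print(max(inv))                # enormous, driven entirely by draws near zero
```

Those few enormous values are exactly the high-leverage points described above; indeed $E[1/X]$ does not even exist for $X \sim \mathrm{Uniform}(0,1)$.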
28,517
Why are Ratios "Dangerous" in Statistical Modelling? [closed]
Actually, it's kind of simple why. Suppose you calculate the CV multiple times from a bootstrap. The CV is $\frac{SD}{Mean}$. Now suppose that the mean value is usually not close to zero, but could be, let's say, one time in a million. What happens then is that we might get a CV that is -1000 times the median of the other CV values. So the problem with ratios of random variables is that the more data we have, the wilder the mean value of the ratio may be, because of the divide-by-almost-zero problem in the denominator.

EDIT: For a more exact example that I am just crudely summarizing here, see: Brody JP, Williams BA, Wold BJ, Quake SR (2002) Significance and statistical errors in the analysis of DNA microarray data. Proc Natl Acad Sci 99(20):12975–12978.
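A tiny, fully deterministic Python illustration of the same point (the numbers are made up): two samples with identical spread, where the one whose mean happens to land near zero has a CV a hundred times larger.

```python
import statistics

def cv(data):
    """Coefficient of variation: SD / mean."""
    return statistics.stdev(data) / statistics.mean(data)

x = [0.9, 1.0, 1.1]        # mean 1.00, sd 0.1  ->  CV ~ 0.1
y = [-0.09, 0.01, 0.11]    # mean 0.01, sd 0.1  ->  CV ~ 10

print(cv(x))   # ~0.1
print(cv(y))   # ~10: a hundredfold jump from a tiny shift of the mean
```

In a bootstrap, each resample shifts the mean a little; whenever it lands near zero, the resulting CV is an outlier like `cv(y)`, which is why the distribution of bootstrapped CVs can have absurd tails.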
28,518
Gaussian Process Regression vs Kalman Filter (for time series)?
TL;DR: Kalman filters and smoothers can be viewed as solvers for a family of Gaussian process regression models, in particular, Markov Gaussian processes.

Say, for example, we have a GP regression model

$$ \begin{split} U(t) &\sim \mathrm{GP}(0, C(t, t')),\\ Y_k &= U(t_k) + \xi_k. \end{split}\tag{1} $$

Now, what is the goal of GP regression? The goal is to learn the posterior distribution $p(u_{1:T} \mid y_{1:T})$ jointly for any times $t_1,\ldots,t_T$ with data $y_{1:T}$. However, this is known to be expensive, as you need to solve a matrix inversion of size $T$. Also, in practice, we are mostly interested in the marginal posteriors $p(u_k \mid y_{1:T})$ for $k=1,2,\ldots,T$ instead of the joint one.

So, is there any efficient solver for $\{p(u_k \mid y_{1:T})\}_{k=1}^T$? Yes, provided that the covariance function $C$ is chosen properly, in the sense that $U$ is a Markov (Gaussian) process governed by a stochastic differential equation

$$ \begin{split} \mathrm{d} \overline{U}(t) &= A \, \overline{U}(t) \,\mathrm{d}t + B \, \mathrm{d}W(t), \\ \overline{U}(t_0) &= \overline{U}_0 \sim N(0, P_0), \end{split}\tag{2} $$

and that $U = H \, \overline{U}$ for some matrix $H$. Then estimating $p(\overline{u}_k \mid y_{1:T})$ from the data is called a smoothing problem in stochastic calculus. This can be solved by using Kalman filters and smoothers in linear computational time. You can also discretise the SDE above into

$$ \overline{U}_k = F \, \overline{U}_{k-1} + q_{k-1},\tag{3} $$

for some matrix $F$ and Gaussian r.v. $q$, if this kind of state-space form looks more familiar to you. By using Kalman filtering and smoothing on (2) or (3), you will get exactly the same results for the GP model (1) as from the standard GP regression equations (you could check Figure 4.1 in [1]).

So, eventually, what are the key differences between conventional GPs and state-space GPs?

- In conventional GPs, you specify their mean and covariance functions. In state-space GPs, you specify the SDE coefficients instead, and the mean and covariance functions are implicitly defined by these SDE coefficients.
- In conventional GPs, you usually solve the regression problem jointly at the data points. In state-space form, you solve it marginally at the data points, in linear computational time. This benefits from the Markov property of the state-space models.

Also please note that not all GPs are Markov (Gaussian) processes; thus, not all GPs can be represented by SDEs! For more detailed expositions, you are welcome to check my dissertation [1]. Feel free to ask me more questions on this topic if you have any.

Some heuristic explanations of (2), as per the request from @jbuddy_13 and for those who are not familiar with SDEs. Solutions of SDEs are stochastic processes, in particular, continuous-time Markov processes (also semimartingales, but don't think Markov <=> semimartingale). You can think of SDEs as a means to construct stochastic processes; Ito's theory was really devoted to constructing diffusion processes. Now since GPs are genuinely stochastic processes, it makes perfect sense to construct GPs via SDEs. It turns out that the solutions to linear SDEs like (2) are indeed (Markov) GPs, and the mean and covariance functions of the GP $\overline{U}$ are governed by certain ODEs (see Eq. 4.9 in [1]).

The SDE in (2) has three main components:

- $A \, \overline{U}$ is called the drift of the SDE (I will call $A$ the drift matrix), and $B$ is called the dispersion of the SDE. The drift term models the infinitesimal change of $\overline{U}$ in time, while the dispersion term adds stochastic volatility to the change.
- $t\mapsto W(t)$ is a Wiener process (also interchangeably called Brownian motion). This is one place where the randomness of (2) comes from.
- $\overline{U}_0$, the initial condition, is a Gaussian random variable. This is the other place where the randomness of (2) comes from.

More intuitively, if we "divide both sides of (2) by $\mathrm{d}t$" we get

$$ \frac{\mathrm{d} \overline{U}(t)}{\mathrm{d}t} = A \, \overline{U}(t) + w(t), $$

where $w(t)$ informally stands for $\mathrm{d}W(t)/\mathrm{d}t$. You can see that the SDE is essentially an ODE driven by a white noise process.
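As a concrete instance of this correspondence (my own illustration, not part of the answer above): the scalar Ornstein-Uhlenbeck SDE $\mathrm{d}U = -\theta U\,\mathrm{d}t + \sigma\,\mathrm{d}W$ is a Markov GP whose implied covariance function is the exponential (Matérn-1/2) kernel $C(t,t') = \frac{\sigma^2}{2\theta} e^{-\theta|t-t'|}$. A few lines of Python verify that the exact discretization of form (3) reproduces this kernel:

```python
import math

theta, sigma, dt = 1.3, 0.7, 0.05   # arbitrary illustration values

# Exact discretization of dU = -theta*U dt + sigma dW into form (3):
#   U_k = F * U_{k-1} + q_{k-1},   q_{k-1} ~ N(0, Qc)
F = math.exp(-theta * dt)
Qc = sigma**2 / (2 * theta) * (1 - math.exp(-2 * theta * dt))

# Stationary variance solves the fixed point P = F^2 * P + Qc
P = Qc / (1 - F**2)                 # equals sigma^2 / (2*theta)

def kernel(tau):
    """Exponential (Matern-1/2) covariance implied by the OU process."""
    return sigma**2 / (2 * theta) * math.exp(-theta * abs(tau))

# Lag-m covariance of the discrete chain, F^m * P, matches the kernel exactly
for m in range(6):
    assert math.isclose(F**m * P, kernel(m * dt), rel_tol=1e-12)
```

So running a Kalman filter/smoother on this one-line recursion gives exactly the GP posterior under the exponential kernel, at $O(T)$ cost instead of $O(T^3)$.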
28,519
A farmer is growing a magical tree
My attempt. For a flower to sprout, it needs to be closer to the trunk than all other current flowers. Define $X_i \sim \mathrm{Unif}[0, 1]$ as the location of the seed on day $i$. For a flower to grow, its seed needs to land below all seeds up to the current point. That is, define $Y_i = 1$ if the flower sprouted and $0$ otherwise. Therefore, the probability for a flower to sprout is

$$P(Y_i = 1) = P\big(X_i < \min_{j < i}(X_j)\big).$$

The minimum of uniform random variables is quite simple to find (assuming independence). Define $M_i = \min_{j < i}(X_j)$; then

$$P(M_i < m) = 1 - P(M_i > m) = 1 - P(X_1 > m, \ldots, X_{i-1} > m) = 1 - (1 - m)^{i-1}.$$

The density is

$$f_{M_i}(m) = (i-1)(1-m)^{i-2},$$

so the expected value is

$$E(M_i) = \int_0^1 m\,(i-1)(1-m)^{i-2}\, \mathrm{d}m = (i-1) \int_0^1 (1-t)\,t^{i-2}\,\mathrm{d}t.$$

Finally, we obtain

$$E(M_i) = \frac{1}{i}.$$

Note that this answers your question 2: the average location of the lowest flower after day $n$ is $\frac{1}{n+1}$ (since we are looking at the minimum over all $n$ seeds, i.e. $M_{n+1}$).

Now, to question 1. The expected number of flowers (adding 1 for the first seed, which always sprouts, and using $E(Y_i) = E\big(E(Y_i \mid M_i)\big) = E(M_i)$) is

$$E\Big(1 + \sum_{i=2}^n Y_i\Big) = 1 + \sum_{i=2}^n E\big(E(Y_i \mid M_i)\big) = 1 + \sum_{i=2}^n E(M_i) = 1 + \sum_{i=2}^n \frac{1}{i} = \sum_{i=1}^n \frac{1}{i}.$$

Verifying the results using simulation (using R):

```r
runDays <- function(days) {
  flowers <- Inf
  for (i in 1:days) {
    possible <- runif(1)
    if (possible < min(flowers)) {
      flowers <- c(flowers, possible)
    }
  }
  return(flowers[-1])
}
```

Question 1,

```r
x <- replicate(100000, length(runDays(50)))
mean(x)
[1] 4.49174
sum(1 / 1:50)
[1] 4.499205
x <- replicate(100000, length(runDays(100)))
mean(x)
[1] 5.17413
sum(1 / 1:100)
[1] 5.187378
```

Question 2,

```r
x <- replicate(100000, rev(runDays(50))[1])
mean(x)
[1] 0.0195804
1 / 51
[1] 0.01960784
x <- replicate(100000, rev(runDays(100))[1])
mean(x)
[1] 0.009909106
1 / 101
[1] 0.00990099
```

Looks ok.
28,520
A farmer is growing a magical tree
Kozolovska gives a good answer. This one outlines a different solution method.

Let $X_n$ be the location of the leftmost flower after $n$ days. The problem states $X_0=1$ and the distribution of $X_{n+1}$ conditional on $X_n$ is a mixture of $X_n,$ with probability $1-X_n,$ and a uniform distribution on $[0,X_n),$ with probability $X_n.$ The moment generating function of the latter is

$$\phi_{X_n}(t) = E\left[e^{tX_n}\right] = \int_0^{X_n} \frac{e^{t x}}{X_n}\,\mathrm{d}x =\frac{e^{tX_n}-1}{tX_n}.$$

Let's compute the moment generating function of $X_{n+1}.$ It is immediate that the conditional expectation of $\exp(tX_{n+1})$ is the same linear combination of expectations of the mixture components,

$$E\left[e^{tX_{n+1}}\mid X_n\right] = (1-X_n)e^{tX_n} + X_n\phi_{X_n}(t).$$

Taking expectations (w.r.t. $X_n$) yields

$$\begin{aligned} \phi_{n+1}(t) &= E[e^{tX_{n+1}}]= E\left[E\left[e^{tX_{n+1}}\mid X_n\right] \right] \\ &= \phi_n(t) - \phi_n^\prime(t) + \frac{\phi_n(t) - 1}{t}. \end{aligned}\tag{*}$$

Clearly $\phi_0(t) = \exp(t(1)) = \exp(t).$ The general solution (which you can check by plugging it into the recursion $(*)$) is

$$\phi_n(t) = \frac{n!}{t^n}\left(e^t - 1 - t - \frac{t^2}{2} - \cdots - \frac{t^{n-1}}{(n-1)!}\right).$$

This is the moment generating function of a Beta$(1,n)$ variable. Since $X_n$ is bounded, its m.g.f. determines its distribution, so we conclude $X_n$ has a Beta$(1,n)$ distribution. (This is more readily derived using uniform order statistics -- but it might be of interest to see it emerge using the m.g.f. method.)

Here is a simulation using R.

```r
n <- 50
n.sim <- 1e5
X <- rbind(1, apply(matrix(runif(n * n.sim), n), 2, cummin))
```

The n rows of X record values of $X_0=1, X_1, \ldots, X_{n}$ in n.sim independent simulations of this process. The histogram of the last one indeed matches the theoretical Beta density:

```r
hist(X[n, ], freq=FALSE, breaks=100, ylim=c(0, 1/beta(1,n)), col=gray(.95),
     main=bquote(paste("Histogram of ", X[.(n)])), xlab="Value")
curve(dbeta(x,1,n), lwd=2, col="Red", add=TRUE, xlim=c(1e-6,1), n=1001)
```

The expected numbers of remaining values ("flowers") match the theory, too, as in this scatterplot of all n simulated random variables:

```r
i <- apply(X, 2, function(x) cumsum(diff(x) < 0))
plot((cumsum(1/seq_len(n))), rowMeans(i), main="Expected Count",
     xlab="Theory", ylab="Simulation")
abline(0:1, col="Red")
```
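The Beta$(1,n)$ conclusion is also easy to spot-check outside R. A small Python sketch (my own, not part of the answer) estimates $E[X_n]$, which should be the Beta$(1,n)$ mean $1/(n+1)$:

```python
import random

random.seed(2024)
n, n_sim = 50, 100_000

# X_n is the running minimum of n iid Uniform(0,1) seeds (the leftmost flower)
mins = [min(random.random() for _ in range(n)) for _ in range(n_sim)]
mean_min = sum(mins) / n_sim

print(mean_min)    # should be close to 1/(n+1) = 1/51 ~ 0.0196
```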
28,521
Why do we multiply log likelihood times -2 when conducting MLE?
Why do we multiply log likelihood times -2 when conducting MLE? We really don't. The -2 was not about parameter estimation; for that, we'd just use the (negative log-)likelihood. It was about hypothesis testing.

Your intuition about negation is correct. Traditionally, in the optimization literature, we minimize functions. It's easy enough to convert a maximization problem into a minimization problem by negating the objective. The parameters that maximize the log-likelihood are the ones that minimize the negative log-likelihood.

You've shown some links that use the quantity -2LL in the specific case of linear regression. There's a computational reason for this and a statistical reason.

The computational reason (which is weaker; more of an 'it doesn't matter'). An objective multiplied by a scalar constant will have the same optimum. In the Gaussian log-likelihood, every term is a fraction with denominator 2. So why bother dividing? By including the -2, you don't have to divide every term by 2. (Not that computers have much trouble with dividing by powers of 2...)

The statistical reason (which argues for a meaningful benefit of the -2). This quantity, as you note, is called the deviance. The -2 factor is useful for statistical hypothesis testing. In a likelihood ratio test, this helps you to compute a $p$-value. Quoting the Wikipedia article on likelihood ratio tests:

Multiplying by −2 ensures mathematically that (by Wilks' theorem) ${\displaystyle \lambda _{\text{LR}}}$ converges asymptotically to being χ²-distributed if the null hypothesis happens to be true.

To add context, I'll also quote two of the articles you linked: first

LR chi2(3) – This is the likelihood ratio (LR) chi-square test. The likelihood chi-square test statistic...This is minus two (i.e., -2) times the difference between the starting and ending log likelihood.

and second:

Multiplying it by -2 is a technical step necessary to convert the log-likelihood into a chi-square distribution, which is useful because it can then be used to ascertain statistical significance. Don't worry if you do not fully understand the technicalities of this.

They both give the same message as the Wikipedia article.
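To make the statistical reason concrete, here is a hedged sketch (simulated data, plain Python; testing $H_0\!:\mu=0$ for a Gaussian sample with $\sigma$ profiled out). The statistic is exactly "minus two times the difference in log likelihoods", and with one degree of freedom its χ² tail probability can be written with `erfc`:

```python
import math, random

random.seed(1)
x = [random.gauss(0.3, 1.0) for _ in range(200)]   # true mean is 0.3, not 0
n = len(x)

def max_loglik(data, mu):
    # Gaussian log-likelihood with sigma profiled out at its MLE given mu
    s2 = sum((v - mu) ** 2 for v in data) / len(data)
    return -0.5 * len(data) * (math.log(2 * math.pi * s2) + 1)

ll_full = max_loglik(x, sum(x) / n)     # mu free: MLE is the sample mean
ll_null = max_loglik(x, 0.0)            # H0: mu = 0

lr = -2 * (ll_null - ll_full)           # "-2 times the difference in log likelihoods"
p_value = math.erfc(math.sqrt(lr / 2))  # chi-square(1) survival function
print(lr, p_value)
```

By Wilks' theorem, `lr` is approximately χ²(1) under the null, which is what makes the tail probability above a valid asymptotic $p$-value.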
28,522
DAGs: instrumental and adjusted variables
While drawing DAGs...what are instrumental and adjusted variables? An instrumental variable is an observed variable that is often used to help obtain an unbiased estimate of a causal effect that is confounded by another variable, usually an unobserved one. The classical situation can be depicted in the following DAG: Here, X is our main exposure, and the causal effect of X on Y is confounded by U. Z is an instrumental variable for X: it is associated with X (and that association is unconfounded); it affects Y only via X; and Z and Y share no common causes. An adjusted variable is simply an observed variable which is adjusted for (i.e., included as a covariate) in a regression model.
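A minimal simulation of this kind of DAG (Python; all coefficients are made up for illustration) shows why the instrument helps: ordinary regression of Y on X is biased by the unobserved confounder U, while the simple IV (Wald) estimator cov(Z,Y)/cov(Z,X) recovers the causal effect.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
beta = 2.0                          # true causal effect of X on Y

U = rng.normal(size=n)              # unobserved confounder
Z = rng.normal(size=n)              # instrument: no path to Y except through X
X = Z + U + rng.normal(size=n)
Y = beta * X + 3.0 * U + rng.normal(size=n)

ols = np.cov(X, Y)[0, 1] / np.var(X, ddof=1)    # confounded: converges to 3, not 2
iv = np.cov(Z, Y)[0, 1] / np.cov(Z, X)[0, 1]    # Wald/IV estimator: converges to 2
print(ols, iv)
```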
28,523
Differences between Sampler, MonteCarlo, Metropolis-Hasting method, MCMC method and Fisher formalism
1. A sampler (or sampling algorithm) is any procedure designed to generate draws from a target distribution $\pi(\cdot)$.

2. Your understanding seems correct to me. Monte Carlo essentially leverages the Law of Large Numbers. Suppose that $X$ is distributed according to a distribution $\pi(x)$ and $\theta$ is a scalar quantity $\theta = E(g(X))$ which you would like to estimate. \begin{align*} \theta &= E(g(X)) \\[1.2ex] &= \int g(x)\pi(x) dx \\[1.2ex] &\approx \frac{1}{M}\sum_{i=1}^Mg(x_i) && \text{(the MC estimator)} \end{align*} where $x_1, x_2, \cdots, x_M$ are independent draws from the target distribution $\pi(x)$. Note that Monte Carlo, which is an estimation procedure, always requires that a sampler already exists for the target distribution.

3. This seems to be where your confusion stems from. The Metropolis-Hastings algorithm (which is an MCMC method) is "just a sampler," one commonly used for parameter inference in Bayesian statistics. The common use-case may be what's confusing you, so focus on the fact that the MH algorithm is used to sample from a target distribution $\pi(x)$, $x \in \mathbb R^d$. Unlike most of the other "samplers" you mention, the MH algorithm does NOT generate independent draws from the target distribution. Regardless, as the number of samples increases, each draw (in theory) is distributed according to $\pi(x)$. This allows us to estimate $\theta$ in the same way as above (i.e., question 2). Due to its many advantages (the target density need not be "normalized", it is easy to choose a fast "proposal distribution", it works well in high dimensions), the MH algorithm is often used to sample from a posterior distribution $\pi(\theta|x)$. These samples from the posterior can then be used for inference, such as parameter estimation. The MH algorithm itself, however, refers to the sampler.

4. Yes, the accept-reject algorithm is a sampler.

5. Hopefully this has been mostly answered in the response to question 3. When using an MCMC algorithm to sample from a distribution (usually a posterior), each "sample" depends on the sample before it. That is, the generated samples are not independent but can be viewed as a Markov chain. Still, assuming the MCMC sampler has "converged", these draws can be used in the usual Monte Carlo way.
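A minimal random-walk Metropolis sketch (Python; the target is an unnormalized standard normal, chosen purely for illustration) shows both points at once: the draws are dependent, yet their long-run averages still estimate $E[g(X)]$ as in question 2.

```python
import math, random

random.seed(0)

def target(x):
    # unnormalized target density: standard normal up to a constant
    return math.exp(-0.5 * x * x)

x, draws = 0.0, []
for _ in range(50_000):
    prop = x + random.gauss(0.0, 1.0)      # symmetric random-walk proposal
    # Metropolis acceptance: the unknown normalizing constant cancels here
    if random.random() < target(prop) / target(x):
        x = prop
    draws.append(x)                        # dependent draws, a Markov chain

kept = draws[5_000:]                       # discard burn-in
mean_est = sum(kept) / len(kept)                 # estimates E[X] = 0
var_est = sum(v * v for v in kept) / len(kept)   # estimates E[X^2] = 1
print(mean_est, var_est)
```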
28,524
Differences between Sampler, MonteCarlo, Metropolis-Hasting method, MCMC method and Fisher formalism
Samplers are algorithms used to generate observations from a probability density (or distribution) function. Two examples are algorithms that rely on the Inverse Transform Method and Accept-Reject methods. On the other hand, an estimator is an approximation of an often unknown quantity. Monte Carlo methods refer to a family of algorithms used to obtain such estimates; their defining characteristic is that they rely on samples from probability distributions to obtain these approximations. This is where the two concepts connect. Markov Chain Monte Carlo (MCMC) methods combine these two ideas to generate samples and estimate quantities of interest with those samples. Metropolis-Hastings is one of many MCMC algorithms. For example, if your quantity of interest is the mean of a posterior distribution, this usually means you have to solve an integral. In higher dimensions, the integral is often very difficult or even impossible to solve analytically. The idea of MCMC methods is to simulate a sample from the posterior distribution and then estimate the integral needed to calculate the mean using the average of the sample. For a friendly introduction to these concepts, I think Introducing Monte Carlo Methods with R by Robert & Casella is a great reference.
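As an illustration of the Inverse Transform Method mentioned above (Python; Exponential(λ) chosen for convenience): the CDF $F(x)=1-e^{-\lambda x}$ inverts to $F^{-1}(u) = -\ln(1-u)/\lambda$, and the resulting sampler feeds a Monte Carlo estimate of the mean.

```python
import math, random

random.seed(3)
lam = 2.0

def exp_inv_cdf(u):
    # inverse of F(x) = 1 - exp(-lam * x), for u in [0, 1)
    return -math.log(1.0 - u) / lam

# push uniforms through the inverse CDF to get Exponential(lam) draws
sample = [exp_inv_cdf(random.random()) for _ in range(100_000)]

mean_est = sum(sample) / len(sample)   # Monte Carlo estimate of E[X] = 1/lam
print(mean_est)
```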
28,525
How does quantile regression compare to logistic regression with the variable split at the quantile?
For simplicity, assume you have a continuous dependent variable Y and a continuous predictor variable X.

Logistic Regression

If I understand your post correctly, your logistic regression will categorize Y into 0 and 1 based on the quantile of the (unconditional) distribution of Y. Specifically, the q-th quantile of the distribution of observed Y values will be computed and Ycat will be defined as 0 if Y is strictly less than this quantile and 1 if Y is greater than or equal to this quantile. If the above captures your intent, then the logistic regression will model the odds of Y exceeding or being equal to the (observed) q-th quantile of the (unconditional) Y distribution as a function of X.

Quantile Regression

On the other hand, if you are performing a quantile regression of Y on X, you are focusing on modelling how the q-th quantile of the conditional distribution of Y given X changes as a function of X.

Logistic Regression versus Quantile Regression

It seems to me that these two procedures have totally different aims, since the first procedure (i.e., logistic regression) focuses on the q-th quantile of the unconditional distribution of Y, whereas the second procedure (i.e., quantile regression) focuses on the q-th quantile of the conditional distribution of Y. The unconditional distribution of Y is the distribution of Y values (hence it ignores any information about the X values). The conditional distribution of Y given X is the distribution of those Y values for which the values of X are the same.

Illustrative Example

For illustration purposes, let's say Y = cholesterol and X = body weight. Then logistic regression is modelling the odds of having a 'high' cholesterol value (i.e., greater than or equal to the q-th quantile of the observed cholesterol values) as a function of body weight, where the definition of 'high' has no relation to body weight. In other words, the marker for what constitutes a 'high' cholesterol value is independent of body weight. What changes with body weight in this model is the odds that a cholesterol value would exceed this marker.

On the other hand, quantile regression is looking at how the 'marker' cholesterol values, for which q% of the subjects with the same body weight in the underlying population have a higher cholesterol value, vary as a function of body weight. You can think of these cholesterol values as markers for identifying what cholesterol values are 'high' - but in this case, each marker depends on the corresponding body weight; furthermore, the markers are assumed to change in a predictable fashion as the value of X changes (e.g., the markers tend to increase as X increases).
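The cholesterol/body-weight distinction can be sketched numerically (Python; all numbers are hypothetical). The logistic-regression setup fixes one cutoff from the unconditional distribution of Y, whereas the quantile-regression target, crudely approximated here by within-bin quantiles, moves with X:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200_000
weight = rng.uniform(50, 100, size=n)             # X: body weight (hypothetical units)
chol = 2.0 * weight + rng.normal(0, 10, size=n)   # Y: rises with X by construction

# Logistic-regression setup: ONE cutoff, the unconditional 90th percentile of Y
cutoff = np.quantile(chol, 0.90)

# Quantile-regression target: the 90th percentile of Y given X, which moves with X
q_low = np.quantile(chol[weight < 60], 0.90)
q_high = np.quantile(chol[weight > 90], 0.90)
print(cutoff, q_low, q_high)
```

The single unconditional cutoff sits between the two conditional 'markers', which differ substantially across weight groups; that gap is exactly what the dichotomized logistic model cannot see.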
28,526
How does quantile regression compare to logistic regression with the variable split at the quantile?
They won't be equal, and the reason is simple. With quantile regression you model the quantile conditional on the independent variables. Your approach with logistic regression fits the marginal (unconditional) quantile.
28,527
How does quantile regression compare to logistic regression with the variable split at the quantile?
One asks "what is the effect on the nth quantile of the dependent variable's distribution?" The other asks "what is the effect on the probability that the dependent variable falls into the nth quantile of its unconditional distribution?" I.e., the fact that they both have the word "quantile" in them lets them look more similar than they are. I guess if you first estimated a conditional quantile function, used this for the split, and proceeded from there, the two approaches would become more similar. But I don't see what you would stand to gain from such a detour.
28,528
How does quantile regression compare to logistic regression with the variable split at the quantile?
This is roughly the deal, if I've transcribed these correctly. See https://en.wikipedia.org/wiki/Quantile_regression for $\rho_p$.

Logistic Regression: $$ p(y_{thresh}) = \arg \min_{p} \sum_i J^{logistic}(p, y_i < y_{thresh}) $$

Quantile Regression: $$ y(p_{thresh}) = \arg \min_{y} \sum_i \rho_p(y_i - y) $$

The question is (I can't remember): are the score functions for these variational problems the only ones possible for MLE? If not, is there a pairing that guarantees equivalence, in the sense that the same pairings $(p, y)$ are generated?
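One can at least check numerically that minimizing the quantile-regression objective over a constant reproduces the empirical quantile (Python sketch, brute-force over the sample points):

```python
import random

random.seed(5)
y = [random.gauss(0.0, 1.0) for _ in range(501)]
p = 0.75

def pinball(u, p):
    # rho_p(u): p*u for u >= 0, (p - 1)*u for u < 0
    return p * u if u >= 0 else (p - 1) * u

def loss(c):
    # the quantile-regression objective restricted to a constant model
    return sum(pinball(yi - c, p) for yi in y)

best = min(y, key=loss)                 # brute-force argmin over sample points
emp = sorted(y)[int(p * (len(y) - 1))]  # empirical 75th percentile (order statistic)
print(best == emp)                      # both are the same order statistic
```

The subgradient of the summed pinball loss changes sign exactly where the empirical CDF crosses $p$, which is why the brute-force minimizer and the order statistic coincide here.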
28,529
Is "random projection" strictly speaking not a projection?
What is the definition of a projection in this strict (linear algebraic) sense (of the word)?

https://en.wikipedia.org/wiki/Projection_(linear_algebra)

In linear algebra and functional analysis, a projection is a linear transformation $P$ from a vector space to itself such that $P^2 = P$. That is, whenever $P$ is applied twice to any value, it gives the same result as if it were applied once (idempotent).

For orthogonal projection or vector projection you have that

https://en.wikipedia.org/wiki/Projection_(linear_algebra)

An orthogonal projection is a projection for which the range U and the null space V are orthogonal subspaces.

Why isn't RP a projection under this definition?

Michael Mahoney writes in your lecture notes that it depends on how the RP is constructed whether or not the RP is a projection in the traditional linear algebraic sense. This he does in the third and fourth points:

Third, if the random vectors were exactly orthogonal (as they actually were in the original JL constructions), then we would have that the JL projection was an orthogonal projection ... but although this is false for Gaussians, $\lbrace \pm \rbrace $ random variables, and most other constructions, one can prove that the resulting vectors are approximately unit length and approximately orthogonal ... this is “good enough.”

So you could, in principle, do the random projection with a different construction that is limited to orthogonal matrices (although it is not needed). See for instance the original work:

Johnson, William B., and Joram Lindenstrauss. "Extensions of Lipschitz mappings into a Hilbert space." Contemporary Mathematics 26 (1984): 189-206.

...if one chooses at random a rank $k$ orthogonal projection on $l_2^n$ ... To make this precise, we let $Q$ be the projection onto the first $k$ coordinates of $l_2^n$ and let $\sigma$ be normalized Haar measure on $O(n)$, the orthogonal group on $l_2^n$. Then the random variable $$f: (O(n), \sigma) \to L(l_2^n)$$ defined by $$f(U) = U^\star Q U$$ determines the notion of a "random rank $k$ projection."

The wikipedia entry describes random projection in this way (the same is mentioned in the lecture notes on pages 10 and 11):

https://en.wikipedia.org/wiki/Random_projection#Gaussian_random_projection

The first row is a random unit vector uniformly chosen from $S^{d-1}$. The second row is a random unit vector from the space orthogonal to the first row, the third row is a random unit vector from the space orthogonal to the first two rows, and so on.

But you do not generally get this orthogonality when you take all the matrix entries to be random and independent variables with a normal distribution (as whuber mentioned in his comment, with a very simple consequence: "if the columns were always orthogonal, their entries could not be independent").

The matrix $R$, and the product with it, can be seen as a projection in the case of orthonormal rows, because it relates to a projection matrix $P = R^TR$. This is a bit the same as seeing ordinary least squares regression as a projection. The product $b = Rx$ is not itself the projection, but it gives you the coordinates with respect to a different basis. The 'real' projection is $x' = R^Tb = R^TRx$, and the projection matrix is $R^TR$.

The projection matrix $P=R^TR$ needs to be the identity operator on the subspace $U$ that is the range of the projection (see the properties mentioned on the wikipedia page). Or, differently said, it needs to have eigenvalues 1 and 0, such that the subspace on which it acts as the identity is the span of the eigenvectors associated with the eigenvalue 1. With random matrix entries you are not going to get this property. This is the second point in the lecture notes:

... it “looks like” an orthogonal matrix in many ways ... the $range(P^T P)$ is a uniformly distributed subspace ... but the eigenvalues are not in $\lbrace 0, 1 \rbrace$.

Note that in this quote the matrix $P$ relates to the matrix $R$ in the question, and not to the projection matrix $P = R^TR$ that is implied by the matrix $R$.

So random projection by different constructions, such as using random entries in the matrix, is not exactly an orthogonal projection. But it is computationally simpler and, according to Michael Mahoney, it is “good enough.”
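A small numerical illustration of these points (Python, numpy assumed; dimensions chosen arbitrarily): with plain Gaussian entries the implied matrix $R^TR$ is only approximately idempotent, but after orthonormalizing the random directions it becomes a true projection with eigenvalues in $\lbrace 0, 1 \rbrace$.

```python
import numpy as np

rng = np.random.default_rng(0)
k, d = 5, 20

# Gaussian random projection: rows approximately unit-length, approximately orthogonal
R = rng.normal(size=(k, d)) / np.sqrt(d)
P = R.T @ R
print(np.linalg.norm(P @ P - P))   # clearly > 0: P is not exactly idempotent

# Orthonormalize (as in the original JL construction) to get a true projection
Q, _ = np.linalg.qr(R.T)           # columns of Q: an exact orthonormal basis
P_orth = Q @ Q.T
print(np.linalg.norm(P_orth @ P_orth - P_orth))  # ~ 0: idempotent

eig = np.linalg.eigvalsh(P_orth)
print(np.round(eig, 6))            # d - k eigenvalues ~0 and k eigenvalues ~1
```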
Is "random projection" strictly speaking not a projection?
What is the definition of a projection in this strict (linear algebraic) sense (of the word) https://en.wikipedia.org/wiki/Projection_(linear_algebra) In linear algebra and functional analysis, a pro
28,530
Is "random projection" strictly speaking not a projection?
That is right: "random projection" is strictly speaking not a projection. A projection is a clearly defined mathematical object: https://en.wikipedia.org/wiki/Projection_(linear_algebra) -- it is a linear idempotent operator, i.e. a linear operator $P$ such that $P^2 = P$. Applying a projection twice is the same as applying it only once, because after a point is projected onto a subspace, it should just stay there if projected again. There is nothing about orthogonality in this definition; in fact, a projection can be oblique (see Wikipedia). Note that only square matrices can represent "projections" in this sense. "Random projection" uses a random $d\times k$ matrix $R$ with $k\ll d$, so it cannot possibly be a projection in the sense of the above definition. Even if you make the columns of $R$ orthonormal (e.g. by applying the Gram-Schmidt process), this argument still applies. Somebody has recently asked this question about PCA: What exactly should be called "projection matrix" in the context of PCA? -- a $d\times k$ matrix $U$ of orthonormal eigenvectors is strictly speaking not a projection either.
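To make the idempotence condition concrete, here is a small numpy sketch (dimensions arbitrary): a square matrix built from orthonormalized columns is a genuine projection, while the analogous product built from a raw Gaussian random matrix is not.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 10, 3

# A d x k Gaussian matrix, a typical random-projection construction
R = rng.standard_normal((d, k))

# Orthonormalize its columns to build a genuine orthogonal projection P = QQ^T
Q, _ = np.linalg.qr(R)   # Q: d x k with orthonormal columns
P = Q @ Q.T

# P is idempotent (P^2 = P), up to floating point
assert np.allclose(P @ P, P)

# But R R^T built from the raw Gaussian matrix is NOT idempotent in general
M = R @ R.T
assert not np.allclose(M @ M, M)
```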
28,531
Is "random projection" strictly speaking not a projection?
I think the key here is to consider the column space of the $d\times k$ RP matrix $R$ as the subspace onto which we perform the projection. In general, regardless of whether the columns of $R$ are orthogonal, one can project a sample $x\in \mathbb R^d$ onto the column space of $R$ using the following equation [1]: $p = xR(R^TR)^{-1}R^T$, where $p\in\mathbb R^d$. If, as in the older versions of RP, the columns of the matrix $R$ are restricted to be orthonormal, then $R^TR = I\in \mathbb R^{k\times k}$, and therefore the projection of $x$ onto the column space of $R$ becomes: $p = xRR^T$, with $p\in\mathbb R^d$, and $RR^T\in\mathbb R^{d\times d}$ becomes a projection matrix, because it is square and $(RR^T)^2=RR^TRR^T=RR^T$. Perhaps the claim that the older version of Random Projection (where the columns of $R$ were orthonormal) is in fact a projection refers to the fact that in that case the embedding down to $\mathbb R^k$ and posterior reconstruction back to $\mathbb R^d$ of a sample $x\in\mathbb R^d$ given by $xRR^T$ is indeed a projection onto the column space of $R$, and $RR^T$ is a projection matrix. I would be grateful if you could confirm/correct my reasoning here. Reference: [1] http://www.dankalman.net/AUhome/classes/classesS17/linalg/projections.pdf
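The general formula $p = xR(R^TR)^{-1}R^T$ can be verified numerically. The sketch below uses column-vector convention, i.e. $p = R(R^TR)^{-1}R^T x$, which is the same operator; names and dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
d, k = 8, 3
R = rng.standard_normal((d, k))   # random matrix, columns not orthonormal
x = rng.standard_normal(d)

# Projection onto the column space of R (general formula, column-vector form)
M = R @ np.linalg.inv(R.T @ R) @ R.T   # the d x d projection matrix
p = M @ x

# M is a true projection: idempotent, and projecting p again leaves it fixed
assert np.allclose(M @ M, M)
assert np.allclose(M @ p, p)
```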
28,532
Is "random projection" strictly speaking not a projection?
If you use recomputable random sign flipping or permutation prior to the Fast Walsh-Hadamard transform, the random projection is orthogonal.
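This can be checked with an explicit (normalized) Hadamard matrix instead of the fast recursive transform: composing a random diagonal sign flip with the transform yields an orthogonal operator. A numpy sketch, assuming the Sylvester construction:

```python
import numpy as np

def hadamard(n):
    # Sylvester construction of an n x n Hadamard matrix (n a power of 2),
    # normalized so that its rows are orthonormal
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H / np.sqrt(n)

rng = np.random.default_rng(0)
n = 16
D = np.diag(rng.choice([-1.0, 1.0], size=n))  # random sign flipping
T = hadamard(n) @ D                           # sign-flip first, then transform

# The combined operator is orthogonal: T^T T = T T^T = I
assert np.allclose(T.T @ T, np.eye(n))
```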
28,533
Does the $R^2$ depend on sample size?
No, the expectation of the estimated $R^2$ will not change, but the variance of its estimate will decrease as the sample size grows. – user158565

We need to take the statement "The smaller the subsample, the closer $R^2$ is to 1" advisedly. Although it's true that the chance of a sample $R^2$ being close to 1 might increase with smaller sample size, that's only because the sample $R^2$ becomes more variable as the sample size decreases. It definitely does not tend to grow closer to 1! The theorems therefore focus on the distribution of a sample $R^2$ and, especially, on its variance. That distribution is directly related to the F ratio distribution of the regression F statistic. See your favorite regression text for details. – whuber
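The point about variability can be illustrated with a small simulation (a sketch; the model and sample sizes are arbitrary): $R^2$ estimates from tiny samples scatter widely around the population value, while large-sample estimates cluster tightly.

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_r2(n):
    # One draw of R^2 from y = x + noise, fit by least squares.
    # With unit-variance x and noise, the population R^2 is 0.5.
    x = rng.standard_normal(n)
    y = x + rng.standard_normal(n)
    X = np.column_stack([np.ones(n), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

small = np.array([sample_r2(10) for _ in range(2000)])
large = np.array([sample_r2(1000) for _ in range(2000)])

# Both center near 0.5, but the n=10 estimates are far more spread out
assert abs(large.mean() - 0.5) < 0.05
assert small.std() > 2 * large.std()
```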
28,534
Does the $R^2$ depend on sample size?
It seems that you are trying to describe what is known as the "Adjusted R-squared", which indeed depends on the number of observations $n$ and the number of model parameters $p$: $$R^2 = 1- \dfrac{SSRes}{SSTotal}$$ $$R^2_{adjusted} = 1- \dfrac{n-1}{n-p}\dfrac{SSRes}{SSTotal}$$
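A quick numeric check of the two formulas, computing SSRes and SSTotal from a toy fit (function and variable names are illustrative; here $p$ counts all parameters including the intercept):

```python
import numpy as np

def r2_and_adjusted(y, yhat, p):
    """Plain and adjusted R^2; p counts model parameters incl. the intercept."""
    n = len(y)
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    r2 = 1 - ss_res / ss_tot
    r2_adj = 1 - (n - 1) / (n - p) * ss_res / ss_tot
    return r2, r2_adj

# Toy "fit": adjusted R^2 is never above plain R^2, since (n-1)/(n-p) >= 1
rng = np.random.default_rng(0)
y = rng.standard_normal(50)
yhat = y + 0.5 * rng.standard_normal(50)
r2, r2a = r2_and_adjusted(y, yhat, p=3)
assert r2a <= r2
```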
28,535
What stops the network from learning the same weights in multi-head attention mechanism
We observe this kind of redundancy in literally all neural network architectures, starting from simple fully-connected networks (see diagram below), where the same inputs are mapped to multiple hidden units. Nothing prohibits the network from ending up with the same weights here as well. We fight this by random initialization of weights. You usually need to initialize all the weights randomly, except in some special cases where initializing with zeros or other values has proved to work better. The optimization algorithms are deterministic, so there is no reason whatsoever why the same inputs could lead to different outputs if all the initial conditions were the same. The same seems to be true for the original attention paper, but to convince yourself, you can check also this great "annotated" paper with PyTorch code (or Keras implementation if you prefer) and this blog post. Unless I missed something from the paper and the implementations, the weights are treated the same in each case, so there are no extra measures to prevent redundancy. In fact, if you look at the code in the "annotated Transformer" post, in the MultiHeadedAttention class you can see that all the weights in the multi-head attention layer are generated using the same kind of nn.Linear layers.
28,536
What stops the network from learning the same weights in multi-head attention mechanism
I'm not an expert, but I'll try to answer your questions. :) 1) I believe it can happen, as redundant units are very common in neural networks. Another paper referenced by the transformer paper addresses this issue by adding a regularization term to the loss, $p=\mid\mid AA^T-I\mid\mid$, which penalizes redundancy in the matrix $A$. 2) It should be the full vector, since the dimension of the weight matrix is $d_{model}\times d_k$. Due to the reduced dimension of each head, the total computational cost is similar to that of single-head attention with full dimensionality. Also, if we split the vector before attention, the computational cost will be reduced by a factor of $h$.
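The penalty $\|AA^T-I\|$ from point 1) can be sketched numerically: it is large exactly when rows (here standing in for attention heads) collapse onto each other. Toy dimensions, numpy, names illustrative:

```python
import numpy as np

def unit_rows(A):
    # Normalize each row to unit length
    return A / np.linalg.norm(A, axis=1, keepdims=True)

def redundancy_penalty(A):
    """Frobenius norm ||A A^T - I||: zero iff the rows of A are orthonormal."""
    return np.linalg.norm(A @ A.T - np.eye(A.shape[0]))

rng = np.random.default_rng(0)
distinct = unit_rows(rng.standard_normal((4, 16)))  # 4 "heads", 16 dims each
collapsed = np.tile(distinct[0], (4, 1))            # all heads learned the same row

# Collapsed (redundant) heads are far from orthonormal: the penalty is large
assert redundancy_penalty(collapsed) > redundancy_penalty(distinct)
```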
28,537
What stops the network from learning the same weights in multi-head attention mechanism
My question is what stops the network from learning the same weights or linear combination for each of these heads i.e. basically making the multiple head bit redundant. Can that happen? Nothing stops or prevents it, but the different attention heads are calculating attention for different subparts of the query and key vectors; the standard setting with 512 dimensions is 8 heads handling 64 dimensions each. Two different sets of 64 dimensions being redundant across all tokens is highly unlikely, although some papers have shown that, if there are too many attention heads, some of them don't contribute much to the overall result. The multi-head design is more about parallelization of the processing than about improving the result. I also wonder if we actually use the full input vector for each of the heads. Yes. The full input sequence, all 512 tokens, but a different group of vector dimensions for each attention head.
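The "8 heads doing 64 dimensions each" arithmetic can be sketched with shapes, following the original paper's convention in which each head applies its own learned $512\times 64$ projection to the full input vector (numpy, illustrative names, untrained random weights):

```python
import numpy as np

d_model, n_heads = 512, 8
d_k = d_model // n_heads                   # 64 dimensions per head

rng = np.random.default_rng(0)
x = rng.standard_normal((10, d_model))     # 10 tokens, full 512-dim vectors

# Each head applies its own 512 -> 64 projection to the FULL input vector;
# W_q stacks one (d_model x d_k) weight matrix per head
W_q = 0.02 * rng.standard_normal((n_heads, d_model, d_k))
queries = np.einsum('td,hdk->htk', x, W_q)

# Result: one 64-dim query per head per token
assert queries.shape == (n_heads, 10, d_k)
```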
28,538
A simple explanation of PACF plot
An intuitive description of PACF can be "the amount of correlation with each lag that is not accounted for by more recent lags". Autocorrelation satisfies a property that we could call dampened transitivity. If $x_t$ is correlated with $x_{t-1}$ by some amount $0<\rho<1$, then $x_{t-1}$ is correlated with $x_{t-2}$ by $\rho$. This implies that $x_t$ is correlated with $x_{t-2}$ by $\rho^2$, which is smaller than $\rho$. Partial autocorrelation computes the "pure" correlation between $x_t$ and $x_{t-2}$ by removing the "transitive" correlation, that is, the amount of correlation explained by the first lag, and recomputing. For the partial autocorrelation between $x_t$ and $x_{t-3}$, we will remove the correlation with both $x_{t-1}$ and $x_{t-2}$ and recompute, and so on. You can add some geometric flavour to the explanation. You can picture your time series at each lag as a vector in space. A highly autocorrelated series would look something like this. The time series with lag 0 could be the vector at the bottom, for instance, the one above it the series at lag 1, and the other one lag 2. The autocorrelation translates to this setting as a large projection of each vector onto the others. However, what happens if we remove from the original series the projection onto lag 1? The projection of the remaining part of series 0 onto series 2 is very small. This corresponds to the PACF at lag 2.
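The dampened transitivity and the lag-2 PACF can be checked numerically on a simulated AR(1) series; the sketch below uses the standard two-lag closed form $\mathrm{PACF}(2) = (r_2 - r_1^2)/(1 - r_1^2)$, which is near zero for a true AR(1) process.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an AR(1) series x_t = rho * x_{t-1} + noise, with rho = 0.8
rho, n = 0.8, 50_000
x = np.zeros(n)
eps = rng.standard_normal(n)
for t in range(1, n):
    x[t] = rho * x[t - 1] + eps[t]

def acf(series, k):
    # Sample autocorrelation at lag k
    s = series - series.mean()
    return (s[:-k] @ s[k:]) / (s @ s)

r1, r2 = acf(x, 1), acf(x, 2)
# Dampened transitivity: r2 is close to r1**2 (i.e. rho**2 = 0.64)
# Lag-2 PACF: correlation left over after removing what lag 1 explains
pacf2 = (r2 - r1**2) / (1 - r1**2)
assert abs(r1 - rho) < 0.05 and abs(r2 - rho**2) < 0.05
assert abs(pacf2) < 0.05   # essentially zero for an AR(1) process
```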
28,539
How does the subtraction of the logit maximum improve learning?
This is a simple trick to improve the numerical stability. As you probably know, exponential function grows very fast, and so does the magnitude of any numerical errors. This trick is based on the following equality: $$\frac{e^{x+c}}{e^{x+c}+e^{y+c}} = \frac{e^x e^c}{e^x e^c+e^y e^c} = \frac{e^x e^c}{e^c (e^x+e^y)} = \frac{e^x}{e^x+e^y},$$ where $c$ is the maximum which you are subtracting. As you can see, you can subtract any value without changing the softmax output. Selecting the maximum is a convenient way to ensure numerical stability.
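A minimal numpy sketch of the trick (the function name is illustrative): without the shift, `exp(1000)` overflows to infinity; with it, the output is finite and identical to the softmax of the shifted logits.

```python
import numpy as np

def softmax(z):
    # Subtract the max before exponentiating; the result is unchanged
    # (the e^c factors cancel) but overflow is avoided.
    e = np.exp(z - np.max(z))
    return e / e.sum()

z = np.array([1000.0, 1001.0, 1002.0])   # naive np.exp(1000) overflows to inf
p = softmax(z)

assert np.isfinite(p).all() and np.isclose(p.sum(), 1.0)
# Invariance under shifts: same output as for the logits [0, 1, 2]
assert np.allclose(p, softmax(np.array([0.0, 1.0, 2.0])))
```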
28,540
Test for median difference
You could consider a permutation test.

median.test <- function(x, y, NREPS = 1e4) {
  z <- c(x, y)
  i <- rep.int(0:1, c(length(x), length(y)))
  v <- diff(tapply(z, i, median))
  v.rep <- replicate(NREPS, {
    diff(tapply(z, sample(i), median))
  })
  v.rep <- c(v, v.rep)
  pmin(mean(v < v.rep), mean(v > v.rep)) * 2
}

set.seed(123)
n1 <- 100
n2 <- 200
## the two samples
x <- rnorm(n1, mean = 1)
y <- rexp(n2, rate = 1)
median.test(x, y)

Gives a 2-sided p-value of 0.1112, which is a testament to how inefficient a median test can be when we don't appeal to any distributional tendency. If we used MLE, the 95% CI for the median of the normal can just be taken from the mean, since the mean equals the median in a normal distribution: 1.00 to 1.18. The 95% CI for the median of the exponential can be framed as $\log(2)\,\bar{X}$ (the MLE of the rate is $1/\bar{X}$), which by the delta method is 0.63 to 0.80. Therefore the Wald test is statistically significant at the 0.05 level but the median test is not.
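For readers who prefer Python, here is a numpy re-implementation of the same permutation test (a sketch; it uses a different random number stream than the R code, so the p-value will differ somewhat from 0.1112):

```python
import numpy as np

def median_test_perm(x, y, n_reps=10_000, seed=123):
    """Two-sided permutation p-value for a difference in sample medians."""
    rng = np.random.default_rng(seed)
    z = np.concatenate([x, y])
    n_x = len(x)
    observed = np.median(z[n_x:]) - np.median(z[:n_x])
    reps = np.empty(n_reps + 1)
    reps[0] = observed                    # include the observed statistic
    for i in range(1, n_reps + 1):
        perm = rng.permutation(z)         # shuffle the group labels
        reps[i] = np.median(perm[n_x:]) - np.median(perm[:n_x])
    # Two-sided p-value, as in the R version
    return 2 * min(np.mean(observed < reps), np.mean(observed > reps))

rng = np.random.default_rng(123)
x = rng.normal(loc=1.0, size=100)    # analogous to rnorm(100, mean = 1)
y = rng.standard_exponential(200)    # analogous to rexp(200, rate = 1)
p_value = median_test_perm(x, y)
assert 0.0 <= p_value <= 1.0
```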
28,541
Test for median difference
Assuming your outcome is ordinal or interval-valued, you can use the nonparametric median test with k=2. Here's a description from Stata's implementation of it: The median test examines whether it is likely that two or more samples came from populations with the same median. The null hypothesis is that the samples were drawn from populations with the same median. The alternative hypothesis is that at least one sample was drawn from a population with a different median. The test should be used only with ordinal or interval data. Assume that there are score values for k independent samples to be compared. The median test is performed by first computing the median score for all observations combined, regardless of the sample group. Each score is compared with this computed grand median and is classified as being above the grand median, below the grand median, or equal to the grand median. Observations with scores equal to the grand median can be dropped, added to the “above” group, added to the “below” group, or split between the two groups. Once all observations are classified, the data are cast into a 2xk contingency table, and a Pearson’s chi-squared test or Fisher’s exact test is performed.
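The steps described above can be sketched in Python with a hand-rolled Pearson chi-squared on the resulting 2×2 table (ties at the grand median simply dropped, one of the options mentioned; SciPy's `scipy.stats.median_test` implements the same test):

```python
import math
import numpy as np

def mood_median_test(x, y):
    """Median test for two samples (sketch: ties dropped, Pearson chi^2, 1 df)."""
    grand = np.median(np.concatenate([x, y]))
    # Classify each score as above/below the grand median, per sample
    table = np.array([[np.sum(s > grand), np.sum(s < grand)]
                      for s in (x, y)], dtype=float)
    # Pearson chi-squared on the 2x2 contingency table
    row, col, n = table.sum(axis=1), table.sum(axis=0), table.sum()
    expected = np.outer(row, col) / n
    stat = ((table - expected) ** 2 / expected).sum()
    # Survival function of chi^2 with 1 df: P(X > stat) = erfc(sqrt(stat/2))
    p = math.erfc(math.sqrt(stat / 2))
    return stat, p

rng = np.random.default_rng(0)
stat, p = mood_median_test(rng.normal(0, 1, 200), rng.normal(1, 1, 200))
assert p < 0.01   # clearly shifted medians are detected
```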
28,542
Help interpreting count data GLMM using lme4 glmer and glmer.nb - Negative binomial versus Poisson
I believe there are some important problems to be addressed with your estimation. From what I gathered by examining your data, your units are not geographically grouped, i.e. census tracts within counties. Thus, using tracts as a grouping factor is not appropriate to capture spatial heterogeneity, as this means that you have the same number of individuals as groups (or, to put it another way, all your groups have only one observation each). Using a multilevel modelling strategy allows us to estimate individual-level variance while controlling for between-group variance. Since your groups have only one individual each, your between-group variance is the same as your individual-level variance, thus defeating the purpose of the multilevel approach.

On the other hand, the grouping factor can represent repeated measurements over time. For example, in the case of a longitudinal study, an individual's "maths" scores may be recorded yearly, so we would have a yearly value for each student for n years (in this case, the grouping factor is the student, as we have n observations "nested" within students). In your case, you have repeated measures of each census tract by decade. Thus, you could use your TRTID10 variable as a grouping factor to capture "between-decade variance". This leads to 3142 observations nested in 635 tracts, with approximately 4 to 5 observations per census tract. As mentioned in a comment, using decade as a grouping factor is not very appropriate, as you have only around 5 decades for each census tract, and their effect can be better captured by introducing decade as a covariate.

Second, consider whether your data ought to be modelled using a Poisson or negative binomial model (or a zero-inflated approach). Look at the amount of overdispersion in your data. The fundamental characteristic of a Poisson distribution is equidispersion, meaning that the mean is equal to the variance of the distribution.
Looking at your data, it is pretty clear that there is much overdispersion: the variances are much, much greater than the means.

library(dplyr)
dispersionstats <- scaled.mydata %>%
  group_by(decade) %>%
  summarise(
    means = mean(R_VAC),
    variances = var(R_VAC),
    ratio = variances/means)

## dispersionstats
## # A tibble: 5 x 4
##   decade     means variances     ratio
##    <int>     <dbl>     <dbl>     <dbl>
## 1   1970  45.43513   4110.89  90.47822
## 2   1980 103.52365  17323.34 167.33707
## 3   1990 177.68038  62129.65 349.67087
## 4   2000 190.23150  91059.60 478.67784
## 5   2010 247.68246 126265.60 509.78821

Nonetheless, to determine whether the negative binomial is more appropriate statistically, a standard method is a likelihood ratio test between a Poisson and a negative binomial model, which here suggests that the negbin is a better fit.

library(MASS)
library(lmtest)

modelformula <- formula(R_VAC ~ factor(decade) + P_NONWHT * a_hinc + offset(HU_ln))
poismodel <- glm(modelformula, data = scaled.mydata, family = "poisson")
nbmodel <- glm.nb(modelformula, data = scaled.mydata)
lrtest(poismodel, nbmodel)

## Likelihood ratio test
## Model 1: R_VAC ~ factor(decade) + P_NONWHT * a_hinc + offset(HU_ln)
## Model 2: R_VAC ~ factor(decade) + P_NONWHT * a_hinc + offset(HU_ln)
##   #Df  LogLik Df  Chisq Pr(>Chisq)
## 1   8 -154269
## 2   9  -17452  1 273634  < 2.2e-16 ***
## ---
## Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

After establishing this, a next test could consider whether the multilevel (mixed-model) approach is warranted, using a similar comparison; this suggests that the multilevel version provides a better fit. (A similar test could be used to compare a glmer fit assuming a Poisson distribution to the glmer.nb object, as long as the models are otherwise the same.)

library(lme4)
glmmformula <- update(modelformula, . ~ .
+ (1|TRTID10))
nbglmm <- glmer.nb(glmmformula, data = scaled.mydata)
lrtest(nbmodel, nbglmm)

## Model 1: R_VAC ~ factor(decade) + P_NONWHT * a_hinc + offset(HU_ln)
## Model 2: R_VAC ~ factor(decade) + P_NONWHT + a_hinc + (1 | TRTID10) +
##     P_NONWHT:a_hinc + offset(HU_ln)
##   #Df LogLik Df Chisq Pr(>Chisq)
## 1   9 -17452
## 2  10 -17332  1 239.3  < 2.2e-16 ***
## ---
## Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Regarding the estimates of the Poisson and negbin models: they are actually expected to be very similar to each other, with the main distinction being the standard errors, i.e. if overdispersion is present, the Poisson model tends to provide biased standard errors. Taking your data as an example:

poissonglmm <- glmer(glmmformula, data = scaled.mydata, family = "poisson")
summary(poissonglmm)

## Random effects:
##  Groups  Name        Variance Std.Dev.
##  TRTID10 (Intercept) 0.2001   0.4473
## Number of obs: 3142, groups: TRTID10, 635
##
## Fixed effects:
##                      Estimate Std. Error z value Pr(>|z|)
## (Intercept)        -2.876013   0.020602 -139.60   <2e-16 ***
## factor(decade)1980  0.092597   0.007602   12.18   <2e-16 ***
## factor(decade)1990  0.903543   0.007045  128.26   <2e-16 ***
## factor(decade)2000  0.854821   0.006913  123.65   <2e-16 ***
## factor(decade)2010  0.986126   0.006723  146.67   <2e-16 ***
## P_NONWHT           -0.125500   0.014007   -8.96   <2e-16 ***
## a_hinc             -0.107335   0.001480  -72.52   <2e-16 ***
## P_NONWHT:a_hinc     0.160937   0.003117   51.64   <2e-16 ***
## ---
## Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

summary(nbglmm)

## Random effects:
##  Groups  Name        Variance Std.Dev.
##  TRTID10 (Intercept) 0.09073  0.3012
## Number of obs: 3142, groups: TRTID10, 635
##
## Fixed effects:
##                      Estimate Std. Error z value Pr(>|z|)
## (Intercept)        -2.797861   0.056214  -49.77  < 2e-16 ***
## factor(decade)1980  0.118588   0.039589    3.00  0.00274 **
## factor(decade)1990  0.903440   0.038255   23.62  < 2e-16 ***
## factor(decade)2000  0.843949   0.038172   22.11  < 2e-16 ***
## factor(decade)2010  1.068025   0.037376   28.58  < 2e-16 ***
## P_NONWHT            0.020012   0.089224    0.22  0.82253
## a_hinc             -0.129094   0.008109  -15.92  < 2e-16 ***
## P_NONWHT:a_hinc     0.149223   0.018967    7.87 3.61e-15 ***
## ---
## Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Notice how the coefficient estimates are all very similar, the main differences being the significance of one of your covariates and the random-effects variance, which suggests that the unit-level variance captured by the overdispersion parameter in the negbin model (the theta value in the glmer.nb object) absorbs some of the between-tract variance captured by the random effects.

Regarding exponentiated coefficients (and associated confidence intervals), you can use the following:

fixed <- fixef(nbglmm)
confintfixed <- confint(nbglmm, parm = "beta_", method = "Wald")
# Beware: The Wald method is less accurate but much, much faster.

# The exponentiated coefficients are also known as Incidence Rate Ratios (IRR)
IRR <- exp(cbind(fixed, confintfixed))
IRR

##                         fixed      2.5 %     97.5 %
## (Intercept)        0.06094028 0.05458271 0.06803835
## factor(decade)1980 1.12590641 1.04184825 1.21674652
## factor(decade)1990 2.46807856 2.28979339 2.66024515
## factor(decade)2000 2.32553168 2.15789585 2.50619029
## factor(decade)2010 2.90962703 2.70410073 3.13077444
## P_NONWHT           1.02021383 0.85653208 1.21517487
## a_hinc             0.87889172 0.86503341 0.89297205
## P_NONWHT:a_hinc    1.16093170 1.11856742 1.20490048

Final thoughts, regarding zero inflation: there is no multilevel implementation (at least that I am aware of) of a zero-inflated Poisson or negbin model that allows you to specify an equation for the zero-inflated component of the mixture.
The glmmADMB package does, however, let you estimate a constant zero-inflation parameter. An alternative approach would be to use the zeroinfl function in the pscl package, though this does not support multilevel models. Thus, you could compare the fit of a single-level negative binomial to a single-level zero-inflated negative binomial. Chances are that if zero inflation is not significant for the single-level models, it would not be significant for the multilevel specification either.

Addendum

If you are concerned about spatial autocorrelation, you could control for this using some form of geographically weighted regression (though I believe this uses point data, not areas). Alternatively, you could group your census tracts by an additional grouping factor (states, counties) and include this as a random effect. Lastly, and I am not sure if this is entirely feasible, it may be possible to incorporate spatial dependence using, for example, the average count of R_VAC in first-order neighbours as a covariate. In any case, prior to such approaches, it would be sensible to determine whether spatial autocorrelation is indeed present (using Global Moran's I, LISA tests, and similar approaches).
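As a footnote to that last suggestion, the Global Moran's I statistic is simple enough to sketch directly. The following is an illustrative numpy version (not from the answer: the weights matrix and values are made up, and in practice one would use a dedicated package such as R's spdep with real tract adjacencies):

```python
import numpy as np

def morans_i(x, W):
    """Global Moran's I: (n / sum(W)) * (z' W z) / (z' z), z = x - mean(x)."""
    x = np.asarray(x, dtype=float)
    z = x - x.mean()
    n = len(x)
    return (n / W.sum()) * (z @ W @ z) / (z @ z)

# Toy example: 4 areas on a line, binary rook adjacency.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

x = np.array([1.0, 2.0, 3.0, 4.0])  # smoothly trending values across neighbours
print(morans_i(x, W))
```

Values well above the expectation under independence indicate positive spatial autocorrelation (neighbouring areas resemble each other), which is when the spatial adjustments above become worth pursuing.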
What are the consequences of rare events in logistic regression?
The standard rule of thumb for linear (OLS) regression is that you need at least $10$ observations per variable or you will be 'approaching' saturation. For logistic regression, however, the corresponding rule of thumb is that you want $15$ observations of the less commonly occurring category for every variable. The issue here is that binary data just don't contain as much information as continuous data. Moreover, you can have perfect predictions with a lot of data if you only have a couple of actual events. To take an example that is rather extreme, but should be immediately clear, consider a case where you have $N = 300$ and tried to fit a model with $30$ predictors, but had only $3$ events. You simply can't even estimate the association between most of your $X$-variables and $Y$.
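A quick back-of-the-envelope sketch of that extreme case (the numbers simply mirror the example above; "events per variable", EPV, is the quantity the rule of thumb constrains):

```python
# N = 300 observations, 30 predictors, but only 3 events of the rarer category.
n, p, n_events = 300, 30, 3

# Events per variable: the rule of thumb asks for roughly 15.
epv = n_events / p
events_needed = 15 * p
print(epv, events_needed)  # 0.1 vs 450 -- more events than there are data points
```

With an EPV of 0.1 against a target of about 15, the example would need 450 events of the rarer category, which exceeds the entire sample, so most coefficients are simply not estimable in any stable way.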
Understanding early stopping in neural networks and its implications when using cross-validation
Determining the number of epochs by e.g. averaging the number of epochs for the folds and use it for the test run later on? Shortest possible answer: Yes! But let me add some context... I believe you are referring to Section 7.8, pages 246ff, on Early Stopping in the Deep Learning book. The described procedure there, however, is significantly different from yours. Goodfellow et al. suggest to split your data in three sets first: a training, dev, and test set. Then, you train (on the training set) until the error from that model increases (on the dev set), at which point you stop. Finally, you use the trained model that had the lowest dev set error and evaluate it on the test set. No cross-validation involved at all. However, you seem to be trying to do both early stopping (ES) and cross-validation (CV), as well as model evaluation all on the same set. That is, you seem to be using all your data for CV, training on each split with ES, and then using the average performance over those CV splits as your final evaluation results. If that is the case, that indeed is stark over-fitting (and certainly not what is described by Goodfellow et al.), and your approach gives you exactly the opposite result of what ES is meant for -- as a regularization technique to prevent over-fitting. If it is not clear why: Because you've "peeked" at your final evaluation instances during training time to figure out when to ("early") stop training; That is, you are optimizing against the evaluation instances during training, which is (over-) fitting your model (on that evaluation data), by definition. So by now, I hope to have answered your other [two] questions. The answer by the higgs broson (to your last question, as cited above) already gives a meaningful way to combine CV and ES to save you some training time: You could split your full data in two sets only - a dev and a test set - and use the dev set to do CV while applying ES on each split.
That is, you train on each split of your dev set, and stop once the lowest error on the training instances you set aside for evaluating that split has been reached [1]. Then you average the number of epochs needed to reach that lowest error from each split and train on the full dev set for that (averaged) number of epochs. Finally, you validate that outcome on the test set you set aside and haven't touched yet. [1] Though unlike the higgs broson I would recommend to evaluate after every epoch. Two reasons for that: (1) compared to training, the evaluation time will be negligible. (2) imagine your min. error is at epoch 51, but you evaluate at epoch 50 and 60. It isn't unlikely that the error at epoch 60 will be lower than at epoch 50; yet, you would choose 60 as your epoch parameter, which clearly is sub-optimal and in fact even going a bit against the purpose of using ES in the first place.
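The averaging step in that recipe can be sketched in a few lines. This is a toy numpy illustration with made-up per-epoch validation errors; the actual training loop is omitted:

```python
import numpy as np

# Hypothetical validation-error trajectories: rows = CV folds, columns = epochs 1..6,
# recorded while early-stopping on each fold of the dev set.
val_err = np.array([
    [0.90, 0.70, 0.55, 0.50, 0.52, 0.56],
    [0.88, 0.68, 0.52, 0.49, 0.48, 0.53],
    [0.91, 0.72, 0.58, 0.51, 0.50, 0.54],
])

# Each fold's early-stopping point, then the averaged epoch budget for the
# final training run on the full dev set (evaluated on the untouched test set).
best_epoch_per_fold = val_err.argmin(axis=1) + 1  # 1-indexed epochs
n_epochs_final = int(round(best_epoch_per_fold.mean()))
print(best_epoch_per_fold, n_epochs_final)
```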
Understanding early stopping in neural networks and its implications when using cross-validation
The way that you can use cross-validation to determine the optimal number of epochs to train with early stopping is this: suppose we were training for between 1 and 100 epochs. For each fold, train your model and record the validation error every, say, 10 epochs. Save these trajectories of validation error vs number of epochs trained and average them together over all folds. This will yield an "average test error vs epoch" curve. The stopping point to use is the number of epochs that minimizes the average test error. You can then train your network on the full training set (no cross validation) for that many epochs. The purpose of early stopping is to avoid overfitting. You use N-fold cross-validation to estimate the generalization error of your model by creating N synthetic train/test sets and (usually) averaging together the results. Hopefully, the test set (aka new real-world data) that you are given later is going to be similar enough to the synthetic test sets that you generated with CV so that the stopping point you found earlier is close to optimal given this new testing data.
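Note that this averages the trajectories first and then takes the minimizer, which is subtly different from averaging per-fold stopping epochs. A toy numpy sketch with made-up numbers:

```python
import numpy as np

# Hypothetical checkpoints (record validation error every 10 epochs) and
# error trajectories for 3 folds.
checkpoints = np.array([10, 20, 30, 40, 50])
err = np.array([
    [0.80, 0.60, 0.50, 0.52, 0.55],  # fold 1
    [0.78, 0.58, 0.49, 0.48, 0.54],  # fold 2
    [0.82, 0.61, 0.51, 0.53, 0.57],  # fold 3
])

# "Average test error vs epoch" curve; stop at its minimizer.
avg_curve = err.mean(axis=0)
stop_epoch = checkpoints[avg_curve.argmin()]
print(stop_epoch)
```

Here fold 2 alone would have stopped at epoch 40, but the averaged curve bottoms out at epoch 30, which is the value used for the final full-training-set run.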
How to avoid collinearity of categorical variables in logistic regression?
I would second @EdM's comment (+1) and suggest using a regularised regression approach. An elastic-net/ridge regression approach should allow you to deal with collinear predictors. Just be careful to normalise your feature matrix $X$ appropriately before using it, otherwise you will risk regularising each feature disproportionately (yes, I mean the $0/1$ columns: you should scale them such that each column has unit variance and mean $0$). Clearly you would have to cross-validate your results to ensure some notion of stability. Let me also note that instability is not a huge problem, because it actually suggests that there is no obvious solution/inferential result, and simply interpreting the GLM procedure as "ground truth" is incoherent.
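The scaling step warned about here can be sketched directly (numpy only, with made-up dummy data; the actual elastic-net/ridge fit, e.g. with glmnet or scikit-learn, is omitted):

```python
import numpy as np

# Made-up 0/1 dummy matrix standing in for one-hot-encoded categorical predictors.
rng = np.random.default_rng(1)
X = (rng.random((200, 5)) < 0.3).astype(float)

# Centre each column and scale to unit variance, so the ridge/elastic-net
# penalty shrinks all columns comparably rather than penalising rare dummies more.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)

print(Xs.mean(axis=0).round(12), Xs.std(axis=0).round(12))
```

After this step, passing `Xs` to a penalised logistic fit treats every dummy on the same footing; without it, a column that is mostly zeros has tiny variance and its coefficient is effectively penalised much harder.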
How to avoid collinearity of categorical variables in logistic regression?
The VIF is still a useful measure in your case, but the condition number of your design matrix is a more common approach for categorical data. The original reference is here: Belsley, David A.; Kuh, Edwin; Welsch, Roy E. (1980). "The Condition Number". Regression Diagnostics: Identifying Influential Data and Sources of Collinearity. New York: John Wiley & Sons. pp. 100–104. And here are more useful links: https://en.wikipedia.org/wiki/Condition_number https://epub.ub.uni-muenchen.de/2081/1/report008_statistics.pdf
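A minimal illustration of the diagnostic (made-up data; `np.linalg.cond` returns the ratio of the largest to smallest singular value of the matrix, and note that Belsley et al. recommend scaling the columns first, which this sketch skips):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)

# Well-behaved design matrix vs one with a nearly collinear column.
X_ok = np.column_stack([np.ones(n), x1, x2])
X_bad = np.column_stack([np.ones(n), x1, x1 + 1e-6 * rng.normal(size=n)])

kappa_ok = np.linalg.cond(X_ok)
kappa_bad = np.linalg.cond(X_bad)
print(kappa_ok, kappa_bad)  # the second is orders of magnitude larger
```

Large condition numbers (Belsley et al. discuss thresholds around 30 on scaled columns) flag that some linear combination of columns is nearly degenerate, which is exactly the situation dummy-coded categorical predictors can produce.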
How to avoid collinearity of categorical variables in logistic regression?
Another approach would be to perform Multiple Correspondence Analysis (MCA) on your multicollinear independent variables. You will end up with orthogonal (perfectly independent) components, which you can use as IVs in your model. There will be no collinearity present, but it will be hard to interpret the effects of your original variables. On the other hand, if there is multicollinearity, MCA will unite the effects of your correlated IVs into more general components, which you may find even more interpretable and plausible.
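As a rough stand-in for a full MCA implementation (which, e.g., the FactoMineR package in R provides), the core idea of extracting mutually orthogonal components from the indicator matrix can be sketched with an SVD; everything below is illustrative, not proper MCA with its chi-square weighting:

```python
import numpy as np

# Made-up categorical variable with 3 levels, one-hot encoded.
rng = np.random.default_rng(3)
codes = rng.integers(0, 3, size=150)
Z = np.eye(3)[codes]            # indicator (one-hot) matrix
Zc = Z - Z.mean(axis=0)         # centre the columns

# SVD of the centred indicator matrix; the component scores U * s are
# mutually orthogonal, so they carry no collinearity into a regression.
U, s, Vt = np.linalg.svd(Zc, full_matrices=False)
scores = U * s

C = scores.T @ scores           # cross-products: diagonal up to float error
print(np.allclose(C - np.diag(np.diag(C)), 0))
```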
28,549
How to avoid collinearity of categorical variables in logistic regression?
You can check bivariate correlation by using a rank-order or other non-parametric test for categorical variables. It is the same idea as checking the correlation matrix for a group of continuous variables; you just use a different test.
28,550
What is the distribution of the sum of squared chi-square random variables?
If $a, d\sim\chi^2_{2M}$ are independent, then $X=a+d$ will have a $\chi^2_{4M}$ distribution. Since $X$ is non-negative, the CDF of $Y=a^2+2ad+d^2=(a+d)^2=X^2$ can be found by noting $$F_Y(y)=P(Y\leq y)=P(X^2\leq y)=P(X\leq \sqrt{y})=F_X(\sqrt{y}).$$ Therefore, $$f_Y(y)=\frac{1}{2\sqrt{y}}f_X(\sqrt{y})=\frac{1}{2^{2M+1}\Gamma(2M)}y^{M-1}e^{-\sqrt{y}/2}.$$ If $a$ and $d$ are correlated then things are much more intricate. See for example N. H. Gordon & P. F. Ramig's Cumulative distribution function of the sum of correlated chi-squared random variables (1983) for a definition of multivariate chi-squared and the distribution of its sum. If $\mu\neq 2M$ then you are dealing with a non-central chi-squared, so the above will no longer be valid. This post may provide some insight. EDIT: Based on the new information it seems $a$ and $d$ are formed by summing up normal r.v.'s with non-unit variance. Recall that if $Z\sim N(0, 1)$ then $\sqrt{c}Z\sim N(0, c)$. Since now $$a=c\sum_{i=1}^{2M}Z_i^2 \quad\text{(and similarly for } d\text{)},$$ both $a$ and $d$ will have a chi-squared distribution scaled by $c$, i.e. a $\Gamma(M, 2c)$ distribution. In this case $X=a+d$ will be $\Gamma(2M, 2c)$ distributed. As a result, for $Y=X^2$ we have $$f_Y(y)=\frac{1}{2(2c)^{2M}\Gamma(2M)}y^{M-1}e^{-\sqrt{y}/(2c)}.$$
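A quick Monte Carlo sanity check of the scaled case (a Python sketch of my own, with arbitrary values of $M$ and $c$): building $a$ and $d$ from $2M$ squared $N(0,c)$ draws each, the sum $X=a+d$ should match the first two moments of a $\Gamma(2M, 2c)$ distribution, namely mean $2M\cdot 2c = 4Mc$ and variance $2M\cdot(2c)^2 = 8Mc^2$.

```python
import random
import statistics

random.seed(1)
M, c = 3, 0.5
n = 20000

xs = []
for _ in range(n):
    # a and d are each c times a sum of 2M squared standard normals
    a = c * sum(random.gauss(0.0, 1.0) ** 2 for _ in range(2 * M))
    d = c * sum(random.gauss(0.0, 1.0) ** 2 for _ in range(2 * M))
    xs.append(a + d)

# X = a + d should be Gamma(shape 2M, scale 2c):
# mean 4*M*c = 6.0 and variance 8*M*c^2 = 6.0 for these parameters
print(statistics.mean(xs), statistics.variance(xs))
```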
28,551
What is the distribution of the sum of squared chi-square random variables?
Since a non-central chi-square is a sum of independent rv's, the sum of two independent non-central chi-squares $X = a+b$ is also a non-central chi-square, with parameters equal to the sums of the corresponding parameters of the two components: $k_x = k_a+k_b$ (degrees of freedom), $\lambda_x = \lambda_a+\lambda_b$ (non-centrality parameter). To obtain the distribution function of its square $Y =X^2$, one can apply the "CDF method" (as in @francis's answer), $$F_Y(y)=P(Y\leq y)=P(X^2\leq y)=P(X\leq \sqrt{y})=F_X(\sqrt{y})$$ where $$F_X(x)=1 - Q_{k_x/2} \left( \sqrt{\lambda_x}, \sqrt{x} \right)$$ so $$F_Y(y)=1 - Q_{k_x/2} \left( \sqrt{\lambda_x}, y^{1/4} \right)$$ where $Q$ here is Marcum's Q-function. The above applies to non-central chi-squares formed as sums of independent squared normals, each with unit variance but a different mean. ADDENDUM RESPONDING TO QUESTION'S EDIT If the base rv's are $N(0,c)$, then the square of each is a $Gamma(1/2, 2c)$; see https://stats.stackexchange.com/a/122864/28746 . So the rv $a \sim Gamma(M, 2c)$ and $b \sim Gamma(M, 2c)$, and so also $X = a+b \sim Gamma(2M, 2c)$ (shape-scale parametrization; see the Wikipedia article for the additive properties of the Gamma). Then one can apply the CDF method again to find the CDF of the square $Y = X^2$.
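The additivity of the degrees of freedom and the non-centrality parameter is easy to verify by simulation (a Python sketch of my own; recall that a non-central chi-square with parameters $k$, $\lambda$ has mean $k+\lambda$ and variance $2k+4\lambda$):

```python
import random
import statistics

random.seed(2)

def noncentral_chisq(mus):
    # sum of squared N(mu_i, 1) draws: k = len(mus), lambda = sum(mu_i^2)
    return sum((random.gauss(0.0, 1.0) + m) ** 2 for m in mus)

mus_a = [1.0, 0.5]        # k_a = 2, lambda_a = 1.25
mus_b = [2.0, 0.0, 1.0]   # k_b = 3, lambda_b = 5.0

n = 20000
xs = [noncentral_chisq(mus_a) + noncentral_chisq(mus_b) for _ in range(n)]

# X = a + b is non-central chi-square with k = 5 and lambda = 6.25,
# so E[X] = k + lambda = 11.25 and Var[X] = 2k + 4*lambda = 35
print(statistics.mean(xs), statistics.variance(xs))
```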
28,552
Do two quantiles of a beta distribution determine its parameters?
The answer is yes, provided the data satisfy obvious consistency requirements. The argument is straightforward, based on a simple construction, but it requires some setting up. It comes down to an intuitively appealing fact: increasing the parameter $a$ in a Beta$(a,b)$ distribution increases the value of its density (PDF) more for larger $x$ than smaller $x$; and increasing $b$ does the opposite: the smaller $x$ is, the more the value of the PDF increases. The details follow. Let the desired $q_1$ quantile be $x_1$ and the desired $q_2$ quantile be $x_2$ with $1 \gt q_2 \gt q_1 \gt 0$ and (therefore) $1 \gt x_2 \gt x_1 \gt 0$. Then there are unique $a$ and $b$ for which the Beta$(a,b)$ distribution has these quantiles. The difficulty with demonstrating this is that the Beta distribution involves a recalcitrant normalizing constant. Recall the definition: for $a\gt 0$ and $b \gt 0$, the Beta$(a,b)$ distribution has a density function (PDF) $$f(x;a,b) = \frac{1}{B(a,b)} x^{a-1}(1-x)^{b-1}.$$ The normalizing constant is the Beta function $$B(a,b) = \int_0^1 x^{a-1}(1-x)^{b-1}\,\mathrm{d}x = \frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)}.$$ Everything gets messy if we try to differentiate $f(x;a,b)$ directly with respect to $a$ and $b$, which would be the brute force way to attempt a demonstration. One way to avoid having to analyze the Beta function is to note that quantiles are relative areas. That is, $$q_i = F(x_i;a,b)=\frac{\int_0^{x_i} f(x;a,b)\,\mathrm{d}x}{\int_0^1 f(x;a,b)\,\mathrm{d}x}$$ for $i=1,2$. Here, for example, are the PDF and cumulative distribution function (CDF) $F$ of a Beta$(1.15, 0.57)$ distribution for which $x_1=1/3$ and $q_1=1/6$. The density function $x\to f(x;a,b)$ is plotted at the left. $q_1$ is the area under the curve to the left of $x_1$, shown in red, relative to the total area under the curve. $q_2$ is the area to the left of $x_2$, equal to the sum of the red and blue regions, again relative to the total area. 
The CDF at the right shows how $(x_1,q_1)$ and $(x_2,q_2)$ mark two distinct points on it. In this figure, $(x_1,q_1)$ was fixed at $(1/3,1/6)$, $a$ was selected to be $1.15$, and then a value of $b$ was found for which $(x_1,q_1)$ lies on the Beta$(a,b)$ CDF. Lemma: Such a $b$ can always be found. To be specific, let $(x_1, q_1)$ be fixed once and for all. (They stay the same in the illustrations that follow: in all three cases, the relative area to the left of $x_1$ equals $q_1$.) For any $a\gt 0$, the Lemma claims there is a unique value of $b$, written $b(a),$ for which $x_1$ is the $q_1$ quantile of the Beta$(a,b(a))$ distribution. To see why, note first that as $b$ approaches zero, all the probability piles up near values of $1$, whence $F(x_1;a,b)$ approaches $0$. As $b$ approaches infinity, all the probability piles up near values of $0$, whence $F(x_1;a,b)$ approaches $1$. In between, the function $b\to F(x_1;a,b)$ is strictly increasing in $b$. This claim is geometrically obvious: it amounts to saying that if we look at the area to the left under the curve $x\to x^{a-1}(1-x)^{b-1}$ relative to the total area under the curve and compare that to the relative area under the curve $x\to x^{a-1}(1-x)^{b^\prime-1}$ for $b^\prime \gt b$, then the latter area is relatively larger. The ratio of these two functions is $(1-x)^{b^\prime-b}$. This is a function equal to $1$ when $x=0,$ dropping steadily to $0$ when $x=1.$ Therefore the heights of the function $x\to f(x;a,b^\prime)$ are relatively larger than the heights of $x\to f(x;a,b)$ to the left of $x_1$ than they are to the right of $x_1.$ Consequently, the area to the left of $x_1$ must make up a relatively larger share of the total area under the former curve than under the latter. (This is straightforward to translate into a rigorous argument using a Riemann sum, for instance.)
We have seen that the function $b\to F(x_1;a,b)$ is strictly monotonically increasing with limiting values at $0$ and $1$ as $b\to 0$ and $b\to\infty,$ respectively. It is also (clearly) continuous. Consequently there exists a number $b(a)$ where $F(x_1;a,b(a))=q_1$ and that number is unique, proving the lemma. The same argument shows that as $b$ increases, the area to the left of $x_2$ increases. Consequently the values of $F(x_2;a, b(a))$ range over some interval of numbers as $a$ progresses from almost $0$ to almost $\infty.$ The limit of $F(x_2;a,b(a))$ as $a\to 0$ is $q_1.$ Here is an example where $a$ is close to $0$ (it equals $0.1$). With $x_1=1/3$ and $q_1=1/6$ (as in the previous figure), $b(a) \approx 0.02.$ There is almost no area between $x_1$ and $x_2:$ The CDF is practically flat between $x_1$ and $x_2,$ whence $q_2$ is practically on top of $q_1.$ In the limit as $a\to 0$, $q_2 \to q_1.$ At the other extreme, sufficiently large values of $a$ lead to $F(x_2;a,b(a))$ arbitrarily close to $1.$ Here is an example with $(x_1,q_1)$ as before. Here $a=8$ and $b(a)$ is nearly $10.$ Now $F(x_2;a,b(a))$ is essentially $1:$ there is almost no area to the right of $x_2.$ Consequently, you may select any $q_2$ between $q_1$ and $1$ and adjust $a$ until $F(x_2;a,b(a))=q_2.$ Just as before, this $a$ must be unique, QED. Working R code to find solutions is posted at Determining beta distribution parameters $\alpha$ and $\beta$ from two arbitrary points (quantiles).
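The constructive argument above translates directly into a nested bisection: an inner search for $b(a)$ using the Lemma's monotonicity in $b$, inside an outer search on $a$ using the monotonicity of $F(x_2;a,b(a))$. Here is a rough Python sketch of that idea (mine, not the linked R code); it exploits the "quantiles are relative areas" observation so that the Beta normalizing constant is never computed, and it uses a crude midpoint rule, so it is illustrative only:

```python
def F(x, a, b, n=1500):
    # relative area under t^(a-1) (1-t)^(b-1) up to x -- no Beta function needed
    def area(u):
        h = u / n
        return h * sum(((i + 0.5) * h) ** (a - 1) * (1.0 - (i + 0.5) * h) ** (b - 1)
                       for i in range(n))
    return area(x) / area(1.0)

def solve_b(a, x1, q1, lo=0.05, hi=60.0, iters=40):
    # b -> F(x1; a, b) is strictly increasing (the Lemma), so bisect on b
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if F(x1, a, mid) < q1:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def solve_ab(x1, q1, x2, q2, alo=1.0, ahi=8.0, iters=14):
    # a -> F(x2; a, b(a)) is increasing, so bisect on a as well
    for _ in range(iters):
        a = 0.5 * (alo + ahi)
        if F(x2, a, solve_b(a, x1, q1)) < q2:
            alo = a
        else:
            ahi = a
    a = 0.5 * (alo + ahi)
    return a, solve_b(a, x1, q1)

# symmetric targets, so the solution should have a and b (nearly) equal
a, b = solve_ab(0.3, 0.25, 0.7, 0.75)
print(a, b)
```

The brackets on $a$ and $b$ are ad hoc and the quadrature degrades for shape parameters well below 1; see the R code linked above for a polished solver.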
28,553
Estimating quantiles by bootstrap
The problem is more with extreme values of distributions than with quantiles per se. If the true minimum or maximum of the distribution lies beyond the limits of your data, then no amount of bootstrap re-sampling of your data will provide estimates closer to the true minimum or maximum. This answer provides a more formal description of how big this problem is, in the case of bootstrap estimation of a maximum (or minimum) order statistic from samples of a uniform distribution. There are also problems in trying to estimate extreme quantiles, like 1% or 99%, with the bootstrap. This answer provides a good explanation. The distribution of extreme values among bootstrap samples then has more to do with the vagaries of the re-sampling than with the underlying distribution of the population of interest. The median, a frequently used quantile, is quite amenable to bootstrap estimation. This Cross Validated page covers that issue in some detail, with several links to further useful reading that should help in considering these issues for other quantiles.
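The first point is easy to demonstrate: a bootstrap re-sample can never contain a value outside the original sample, so bootstrap estimates of the maximum are capped at the sample maximum. A quick Python sketch (mine, not from the linked answers):

```python
import random

random.seed(0)
true_max = 1.0
sample = [random.uniform(0.0, true_max) for _ in range(100)]

boot_maxes = []
for _ in range(2000):
    resample = [random.choice(sample) for _ in range(len(sample))]
    boot_maxes.append(max(resample))

# Every bootstrap maximum is <= the sample maximum, which is itself
# below the true maximum of the distribution
print(max(boot_maxes), max(sample), true_max)
```

No amount of re-sampling moves the estimate past `max(sample)`, so the bias relative to `true_max` cannot be corrected this way.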
28,554
Number of neurons in the output layer
I am a total novice to this, but my understanding is the following:
input layer - one neuron per input (feature); these are not typical neurons but simply pass the data through to the next layer
hidden layers - the simplest structure has one neuron in a single hidden layer, but deep networks have many neurons and many hidden layers
output layer - this is the final layer of the network (not a hidden layer) and should have as many neurons as there are outputs to the problem. For instance:
regression - may have a single neuron
binary classification - a single neuron with a sigmoid activation function
multi-class classification - multiple neurons, one for each class, and a softmax function to output the proper class based on the probabilities of the input belonging to each class
Reference: https://machinelearningmastery.com/deep-learning-with-python/
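As a sketch of the multi-class case (plain Python; the raw scores and class count are made-up toy numbers): the output layer has one neuron per class, and softmax turns the raw scores into class probabilities:

```python
import math

def softmax(scores):
    # subtract the max score for numerical stability before exponentiating
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# one output neuron per class: 3 classes -> 3 raw scores (logits)
logits = [2.0, 1.0, 0.1]
probs = softmax(logits)
print(probs, probs.index(max(probs)))  # probabilities sum to 1; class 0 wins
```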
28,555
Cohen's d for dependent sample t-test
Geoff Cumming has a few comments on the matter (taken from Cumming, 2013): In many cases, however, the best choice of standardizer is not the SD needed to conduct inference on the effect in question. Consider, for example, the paired design, such as a simple pre–post experiment in which a single group of participants provide both pretest and posttest data. The most appropriate standardizer is virtually always (Cumming, 2012, pp. 290–294; Cumming & Finch, 2001, pp. 568–570) an estimate of the SD in the pretest population, perhaps $s_1$, the pretest SD in our data. By contrast, inference about the difference requires $s_{diff}$, the SD of the paired differences—whether for a paired t test or to calculate a CI on the difference (Cumming & Finch, 2005). To the extent the pretest and posttest scores are correlated, $s_{diff}$ will be smaller than $s_1$, our experiment will be more sensitive, and a value of d calculated erroneously using $s_{diff}$ as standardizer will be too large. The primary reason for choosing $s_{pre}$ as standardizer in the paired design is that the pretest population SD virtually always makes the best conceptual sense as a reference unit. Another important reason is to get d values that are likely to be comparable to d values given by other paired-design experiments possibly having different pretest–posttest correlations and by experiments with different designs, including the independent-groups design, all of which examine the same effect. The d values in all such cases are likely to be comparable because they use the same standardizer—the control or pretest SD. Such comparability is essential for meta-analysis, as well as for meaningful interpretation in context.
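Cumming's warning is easy to reproduce with toy numbers (a Python sketch of my own; the pre/post values are invented): when pre and post scores are highly correlated, $s_{diff}$ is much smaller than $s_1$, so standardizing by $s_{diff}$ inflates $d$:

```python
import statistics

pre  = [10.0, 12.0, 14.0, 16.0, 18.0]
post = [11.2, 12.9, 15.1, 15.8, 19.0]   # roughly pre + 1, so highly correlated

diffs = [b - a for a, b in zip(pre, post)]
mean_diff = statistics.mean(diffs)

s_pre  = statistics.stdev(pre)    # Cumming's recommended standardizer
s_diff = statistics.stdev(diffs)  # the SD used for the paired t test / CI

d_pre  = mean_diff / s_pre
d_diff = mean_diff / s_diff

print(s_pre, s_diff, d_pre, d_diff)
```

Both versions use the same mean change, but here `d_diff` comes out several times larger than `d_pre`; only `d_pre` is comparable across designs and suitable for meta-analysis.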
28,556
Cohen's d for dependent sample t-test
I found the formal answer in Frontiers in Psychology. If $t$ is the test statistic, and $N$ is the number of observations, then: $$ d \approx \frac{2t}{\sqrt{N}} $$ Lakens, D. (2013). Calculating and reporting effect sizes to facilitate cumulative science: a practical primer for t-tests and ANOVAs. Frontiers in Psychology, 4. But be aware that some report a slightly different formula, namely $$ d \approx \frac{2t}{\sqrt{N-2}} = \frac{2t}{\sqrt{df}} $$ See here, for example.
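For two independent groups of equal size $n$ the first formula is exact rather than approximate, since $d = (m_1-m_2)/s_p$ and $t = (m_1-m_2)/(s_p\sqrt{2/n})$ give $d = t\sqrt{2/n} = 2t/\sqrt{N}$ with $N = 2n$. A quick check in Python with made-up data:

```python
import math
import statistics

g1 = [5.1, 6.3, 4.8, 7.0, 5.9, 6.1]
g2 = [4.2, 5.0, 4.9, 5.6, 4.4, 5.3]
n = len(g1)        # equal group sizes
N = 2 * n

# pooled SD: for equal n this is the square root of the average variance
sp = math.sqrt((statistics.variance(g1) + statistics.variance(g2)) / 2)

dm = statistics.mean(g1) - statistics.mean(g2)
d = dm / sp                          # Cohen's d
t = dm / (sp * math.sqrt(2.0 / n))   # independent-samples t statistic

print(d, 2 * t / math.sqrt(N))  # identical up to floating-point rounding
```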
28,557
Cohen's d for dependent sample t-test
Here is a suggested R function that computes Hedges' g (the unbiased version of Cohen's d) along with its confidence interval, for either between- or within-subject designs:
gethedgesg <- function( x1, x2, design = "between", coverage = 0.95) {
    # mandatory arguments are x1 and x2, both a vector of data
    require(psych) # for the functions SD and harmonic.mean.

    # store the columns in a dataframe: more convenient to handle one variable than two
    X <- data.frame(x1,x2)

    # get basic descriptive statistics
    ns  <- lengths(X)
    mns <- colMeans(X)
    sds <- SD(X)

    # get pairwise statistics
    ntilde <- harmonic.mean(ns)
    dmn    <- abs(mns[2]-mns[1])
    sdp    <- sqrt( (ns[1]-1) *sds[1]^2 + (ns[2]-1)*sds[2]^2) / sqrt(ns[1]+ns[2]-2)

    # compute biased Cohen's d (equation 1)
    cohend <- dmn / sdp

    # compute unbiased Hedges' g (equations 2a and 3)
    eta     <- ns[1] + ns[2] - 2
    J       <- gamma(eta/2) / (sqrt(eta/2) * gamma((eta-1)/2) )
    hedgesg <- cohend * J

    # compute noncentrality parameter (equation 5a or 5b depending on the design)
    lambda <- if(design == "between") {
        hedgesg * sqrt( ntilde/2)
    } else {
        r <- cor(X)[1,2]
        hedgesg * sqrt( ntilde/(2 * (1-r)) )
    }

    # confidence interval of the Hedges g (equations 6 and 7)
    tlow <- qt(1/2 - coverage/2, df = eta, ncp = lambda )
    thig <- qt(1/2 + coverage/2, df = eta, ncp = lambda )
    dlow <- tlow / lambda * hedgesg
    dhig <- thig / lambda * hedgesg

    # all done! display the results
    cat("Hedges'g = ", hedgesg, "\n", coverage*100, "% CI = [", dlow, dhig, "]\n")
}
Here is how it could be used:
x1 <- c(53, 68, 66, 69, 83, 91)
x2 <- c(49, 60, 67, 75, 78, 89)
# using the defaults: between design and 95% coverage
gethedgesg(x1, x2)
# changing the defaults explicitly
gethedgesg(x1, x2, design = "within", coverage = 0.90 )
I hope it helps.
28,558
Is it ever a good idea to give "partial credit" (continuous outcome) in training a logistic regression?
This seems like a job for survival analysis, such as Cox proportional hazards analysis or possibly some parametric survival model. Think about this problem in reverse from the way you're explaining it: what predictor variables are associated with quitting at shorter distances? Quitting is the event. The distance covered can be considered equivalent to time-to-event in standard survival analysis. You then have a number of events equal to the number of individuals who quit, so your problem with limited numbers of predictors will diminish: all those who quit provide information. A Cox model, if it works on your data, will provide a linear predictor based on all the predictor variable values, ranking contestants in order of predicted distance to quitting.
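As a sketch of the reframing (the field names here are hypothetical), each contestant becomes a (duration, event) record, with distance covered playing the role of time and finishers treated as right-censored; this is the input format expected by most survival-analysis routines:

```python
def to_survival_records(contestants):
    """Turn raw contestant rows into (duration, event) pairs suitable for a
    survival model such as Cox proportional hazards: distance covered plays
    the role of time, quitting is the event, finishers are right-censored."""
    records = []
    for c in contestants:
        duration = c["distance"]           # the "time" axis is distance here
        event = 1 if c["quit"] else 0      # 1 = quit (event), 0 = censored
        records.append((duration, event))
    return records

contestants = [                            # made-up rows for illustration
    {"distance": 12.0, "quit": True},
    {"distance": 26.2, "quit": False},     # finished the race: censored
    {"distance": 8.5,  "quit": True},
]
print(to_survival_records(contestants))
```

The same pairs, plus the predictor columns, would then be handed to whatever Cox implementation is available in your environment.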
28,559
What's the difference between Bayesian Optimization (Gaussian Processes) and Simulated Annealing in practice
Simulated Annealing (SA) is a very simple algorithm in comparison with Bayesian Optimization (BO). Neither method assumes convexity of the cost function, and neither method relies heavily on gradient information.

SA is, in a way, a slightly educated random walk. The candidate solution jumps around over the solution space following a particular jump schedule (the cooling parameter). You do not care where you landed before, and you don't know where you will land next. It is a typical Markov chain approach. You do not encode any strong assumptions about the underlying solution surface. MCMC optimization has come a long way since SA (see for example Hamiltonian Monte Carlo) but we will not expand further. One of the key issues with SA is that you need to be able to evaluate the cost function many times, "fast". And it makes sense: you need as many samples as possible to explore as many states (i.e. candidate solutions) as possible. You use only a tiny bit of gradient information (in that you almost always accept "better" solutions).

Look now at BO. BO (or, simplistically, Gaussian Process (GP) regression over your cost function evaluations) tries to do exactly the opposite in terms of function evaluations: it tries to minimize the number of evaluations you do. It builds a particular non-parametric model (usually a GP) for your cost function, one that often assumes noise. It does not use gradient information at all. BO allows you to build an informative model of your cost function with a small number of function evaluations. Afterwards you "query" this fitted function for its extrema. Again the devil is in the details; you need to sample intelligently (and assume that your prior is half-reasonable too). There is work on where to evaluate your function next, especially when you know that your function actually evolves slightly over time (e.g. here).

An obvious advantage of SA over BO is that within SA it is very straightforward to put constraints on your solution space. For example, if you want non-negative solutions, you just confine your proposal distribution to non-negative solutions. The same is not so direct in BO, because even if you evaluate your function according to your constraints (say non-negativity), you will need to actually constrain your process too; this task, while not impossible, is more involved.

In general, one would prefer SA in cases where the cost function is cheap to evaluate and BO in cases where the cost function is expensive to evaluate. I think SA is slowly but steadily falling out of favour; in particular, the work on gradient-free optimization (e.g. NEWUOA, BOBYQA) takes away one of its major advantages in comparison with standard gradient descent methods, namely not having to evaluate a derivative. Similarly, the work on adaptive MCMC (e.g. see reference above) renders it wasteful in terms of MCMC optimization for almost all cases.
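To make the "educated random walk" concrete, here is a minimal SA sketch in plain Python (the cooling schedule, proposal width, and toy cost function are all arbitrary choices for illustration); note how the non-negativity constraint is imposed simply by clipping the proposals:

```python
import math
import random

def simulated_annealing(f, x0, steps=5000, temp0=1.0, seed=0):
    """Minimal SA sketch: propose a random jump, always accept improvements,
    accept worse moves with probability exp(-delta / T) under a cooling T."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    for k in range(steps):
        T = temp0 * (1 - k / steps) + 1e-9      # simple linear cooling schedule
        cand = max(0.0, x + rng.gauss(0, 0.5))  # constraint: clip to x >= 0
        fc = f(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / T):
            x, fx = cand, fc
        if fx < fbest:
            best, fbest = x, fx
    return best, fbest

# Toy cost with its minimum at x = 2; SA should land close to it.
best, fbest = simulated_annealing(lambda x: (x - 2.0) ** 2, x0=5.0)
print(best, fbest)
```

Every iteration costs one evaluation of `f`, which is exactly why SA only pays off when `f` is cheap.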
28,560
Assessing variable importance in generalized additive models (GAM)
Variable importance doesn't have a universally agreed-upon definition, but usually it means something like how much variance is explained by a predictor in your model. What you're describing isn't really conventional variable importance, but sensitivity to change in a covariate. Variance explained and sensitivity are not the same thing, and can be very different: a model could be highly sensitive to change in a covariate, but if that covariate itself has low variance, it might not explain much variance in the response. You can make variance explained and sensitivity correlate better numerically by rescaling predictors to have unit variance, but the concepts remain distinct. Sensitivity can be changed simply by rescaling a variable, while variance explained is invariant to scaling in linear models. Sensitivity isn't a single well-defined number for a GAM precisely because of the nonlinearity: the effect of changing a covariate depends on where you start. In the mgcv package, the significance of model terms can be measured through the $\chi^2$ and $p$ values reported by summary.gam and anova.gam. However, significance is yet another concept, somewhat different from importance.
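A tiny numerical illustration of the scaling point (plain Python, made-up data, and simple linear regression standing in for a GAM): rescaling a predictor changes the slope, i.e. the sensitivity, by the scale factor, while $R^2$, the variance explained, is untouched:

```python
def ols_slope_r2(x, y):
    """Least-squares fit of y = a + b*x; returns (slope b, R^2)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return b, 1 - ss_res / ss_tot

x = [1.0, 2.0, 3.0, 4.0, 5.0]        # made-up predictor
y = [1.2, 1.9, 3.2, 3.8, 5.1]        # made-up response
b1, r2a = ols_slope_r2(x, y)
b2, r2b = ols_slope_r2([xi * 100 for xi in x], y)  # same predictor, new units
print(b1, b2, r2a, r2b)   # slope shrinks 100-fold, R^2 is identical
```

The slope (sensitivity) depends on the units of the predictor; the share of variance explained does not.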
28,561
What is scikit-learn's LogisticRegression minimizing?
Never mind my question: I did the derivation once again and found that the scikit-learn equation is correct, and that it is minimizing the negative log-likelihood. Here are the steps:

Let $(X_i,y_i),\ i=1,\dots,m$ be pairs of (features, class), where $X_i$ is a column vector of $N$ features. The class $y_i\in\{1,-1\}$ will be limited to these two values (instead of 0 and 1), which will be useful later. We are trying to model the probability of a feature vector $X$ being of class $y=1$ as:
$$p(y=1|X;w,c) = g(X^Tw+c) = \frac{1}{1+\exp(-(X^Tw+c))}\,,$$
where $w,c$ are the weights and intercept of the logistic regression model. To obtain the optimal $w,c$, we want to maximize the likelihood given the database of labeled data. The optimization problem is:
$$\begin{align} \mathop{argmax}_{w,c}\quad& \mathcal{L}(w,c;X_1,\dots,X_m) \\ &= \prod_{i,y_i=1} p(y=1|X_i;w,c) \prod_{i,y_i=-1} p(y=-1|X_i;w,c) \\ &\langle \text{There are only two classes, so } p(y=-1|\dots) = 1-p(y=1|\dots)\rangle\\ &= \prod_{i,y_i=1} p(y=1|X_i;w,c) \prod_{i,y_i=-1} (1-p(y=1|X_i;w,c)) \\ &\langle \text{Definition of } p\rangle\\ &= \prod_{i,y_i=1} g(X_i^Tw +c) \prod_{i,y_i=-1} (1-g(X_i^Tw +c)) \\ &\langle \text{Useful property: } 1-g(z) = g(-z) \rangle\\ &= \prod_{i,y_i=1} g(X_i^Tw +c) \prod_{i,y_i=-1} g(-(X_i^Tw +c)) \\ &\langle \text{Handy trick of using +1/-1 classes: multiply by } y_i \text{ to have a common product}\rangle\\ &= \prod_{i=1}^m g(y_i (X_i^Tw +c)) \\ \end{align}$$
At this point I decided to apply the logarithm function (since it is monotonically increasing) and flip the maximization problem to a minimization, by multiplying by $-1$:
$$\begin{align} \mathop{argmin}_{w,c}\quad & -\log \left(\prod_{i=1}^m g(y_i (X_i^Tw +c))\right) \\ &\langle \log (a\cdot b) = \log a + \log b \rangle\\ &= -\sum_{i=1}^{m} \log g(y_i (X_i^Tw +c)) \\ &\langle \text{definition of } g \rangle\\ &= -\sum_{i=1}^{m} \log \frac{1}{1+\exp(-y_i (X_i^Tw +c))} \\ &\langle \log (a/b) = \log a - \log b \rangle\\ &= -\sum_{i=1}^{m} \left[ \log 1 - \log (1+\exp(-y_i (X_i^Tw +c))) \right] \\ &\langle \log 1 = 0 \rangle\\ &= \sum_{i=1}^{m} \log (\exp(-y_i (X_i^Tw +c))+1)\,, \end{align}$$
which is exactly the equation minimized by scikit-learn (without the L1 regularization term, and with $C=1$).
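As a quick numerical sanity check (a sketch, not scikit-learn's actual implementation), the final $\pm1$-label form of the loss can be compared against the usual 0/1-label negative log-likelihood; the two agree to floating-point precision:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def loss_pm1(X, y, w, c):
    """Final form with labels in {+1, -1}: sum_i log(1 + exp(-y_i (X_i.w + c)))."""
    return sum(math.log(1 + math.exp(-yi * (dot(xi, w) + c)))
               for xi, yi in zip(X, y))

def nll_01(X, y, w, c):
    """Textbook negative log-likelihood with labels in {0, 1}."""
    total = 0.0
    for xi, yi in zip(X, y):
        p = 1 / (1 + math.exp(-(dot(xi, w) + c)))
        total -= yi * math.log(p) + (1 - yi) * math.log(1 - p)
    return total

X = [[0.5, 1.0], [-1.2, 0.3], [2.0, -0.7]]   # made-up features
w, c = [0.4, -0.8], 0.1                       # made-up parameters
print(loss_pm1(X, [1, -1, 1], w, c), nll_01(X, [1, 0, 1], w, c))
```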
28,562
How is gradient boosting like gradient descent?
Suppose we are in the following situation. We have some data $\{ x_i, y_i \}$, where each $x_i$ can be a number or a vector, and we would like to determine a function $f$ that approximates the relationship $f(x_i) \approx y_i$, in the sense that the least squares error:
$$ \frac{1}{2} \sum_i (y_i - f(x_i))^2 $$
is small.

Now the question arises of what we would like the domain of $f$ to be. A degenerate choice for the domain is just the points in our training data. In this case, we may just define $f(x_i) = y_i$, covering the entire desired domain, and be done with it. A roundabout way to arrive at this answer is by doing gradient descent with this discrete space as the domain. This takes a bit of a change in point of view. Let's view the loss as a function of the true value $y$ and the prediction $f$ (for the moment, $f$ is not a function, but just the value of the prediction)
$$ L(f; y) = \frac{1}{2} (y - f)^2 $$
and then take the gradient with respect to the prediction
$$ \nabla_f L(f; y) = f - y $$
Then the gradient update, starting from an initial value of $y_0$, is
$$ y_1 = y_0 - \nabla_f L(y_0; y) = y_0 - (y_0 - y) = y $$
So we recover our perfect prediction in one gradient step with this setup, which is nice!

The flaw here is, of course, that we want $f$ to be defined at much more than just our training data points. To do this, we must make a few concessions, for we are not able to evaluate the loss function, or its gradient, at any points other than our training data set. The big idea is to weakly approximate $\nabla L$. Start with an initial guess at $f$, almost always a simple constant function $f(x) = f_0$; this is defined everywhere. Now generate a new working dataset by evaluating the gradient of the loss function at the training data, using the initial guess for $f$:
$$ W = \{ (x_i,\, f_0 - y_i) \} $$
Now approximate $\nabla L$ by fitting a weak learner to $W$. Say we get the approximation $F \approx \nabla L$. We have gained an extension of the data $W$ across the entire domain in the form of $F(x)$, though we have lost precision at the training points, since we fit a small learner.

Finally, use $F$ in place of $\nabla L$ in the gradient update of $f_0$ over the entire domain:
$$ f_1(x) = f_0(x) - F(x) $$
We get $f_1$, a new approximation of $f$, a bit better than $f_0$. Start over with $f_1$, and iterate until satisfied.

Hopefully, you see that what is really important is approximating the gradient of the loss. In the case of least squares minimization this takes the form of raw residuals, but in more sophisticated cases it does not. The machinery still applies, though. As long as one can construct an algorithm for computing the loss and the gradient of the loss at the training data, we can use this algorithm to approximate a function minimizing that loss.
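The iteration described above can be sketched in a few lines of plain Python (a toy illustration with one-split regression stumps as the weak learner; the data and hyperparameters are made up for the example). Each stump is fit to the residuals, i.e. the negative gradient of the squared loss:

```python
def fit_stump(x, r):
    """Weak learner: one-split regression stump, least-squares fit to r."""
    best = None
    for t in sorted(set(x))[:-1]:          # a split above max(x) is useless
        left = [ri for xi, ri in zip(x, r) if xi <= t]
        right = [ri for xi, ri in zip(x, r) if xi > t]
        lmean, rmean = sum(left) / len(left), sum(right) / len(right)
        sse = sum((ri - (lmean if xi <= t else rmean)) ** 2
                  for xi, ri in zip(x, r))
        if best is None or sse < best[0]:
            best = (sse, t, lmean, rmean)
    _, t, lmean, rmean = best
    return lambda z, t=t, a=lmean, b=rmean: a if z <= t else b

def boost(x, y, rounds=50, lr=0.3):
    """Gradient boosting for squared loss: each stump weakly approximates
    the negative gradient, i.e. the residuals y - f(x)."""
    f0 = sum(y) / len(y)                   # initial constant guess
    stumps = []
    for _ in range(rounds):
        pred = [f0 + lr * sum(s(xi) for s in stumps) for xi in x]
        resid = [yi - pi for yi, pi in zip(y, pred)]
        stumps.append(fit_stump(x, resid))
    return lambda z: f0 + lr * sum(s(z) for s in stumps)

x = [1, 2, 3, 4, 5, 6]                     # made-up training data
y = [1.0, 1.2, 2.8, 3.1, 4.9, 5.2]
model = boost(x, y)
print([round(model(xi), 2) for xi in x])
```

The learning rate `lr` damps each update, so many small corrections accumulate into a close fit at the training points while `model` remains defined for any input.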
28,563
What's a good way of graphically representing a very large number of paired datapoints?
Given how I understand your aim, I'd just calculate the paired differences (bars - dots), then plot these differences in a histogram or kernel density estimate plot. You could also add any combination of (1) a vertical line corresponding to zero difference and (2) any choice of percentiles. This would highlight what portion of the data have bars exceeding dots, and generally what the observed differences are. (I've assumed that you're not interested in displaying the actual, raw values of bars and dots in the same plot.) One could also plot confidence or posterior credible intervals to indicate whether these differences are significant. (H/T @MrMeritology!)
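A minimal sketch of the computation (plain Python, made-up numbers): take the paired differences, then summarize them with a few percentiles and the share of pairs where bars exceed dots, which is exactly what the suggested histogram would display:

```python
def paired_difference_summary(bars, dots, percentiles=(25, 50, 75)):
    """Paired differences (bars - dots), a few percentiles, and the
    share of pairs where bars exceed dots."""
    diffs = sorted(b - d for b, d in zip(bars, dots))
    n = len(diffs)
    def pct(p):                            # simple nearest-rank percentile
        return diffs[min(n - 1, max(0, round(p / 100 * (n - 1))))]
    share_positive = sum(d > 0 for d in diffs) / n
    return diffs, {p: pct(p) for p in percentiles}, share_positive

bars = [5.1, 4.8, 6.0, 5.5, 4.9]           # made-up paired data
dots = [4.9, 5.0, 5.2, 5.6, 4.1]
diffs, pcts, pos = paired_difference_summary(bars, dots)
print(pcts, pos)
```

The `diffs` list would feed the histogram or density plot; the percentiles and the positive share are the annotations suggested above.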
28,564
What's a good way of graphically representing a very large number of paired datapoints?
With so many pairs you have the possibility of investigating the structure more deeply, for example whether the difference $y_B - y_A$ depends on the "starting point" $y_A$! You could fit a model like
$$ y_B=\mu+\text{offset}(y_A) +\Delta (y_A-\bar{y}_A) + \epsilon $$
and you could even add a quadratic term $+\Delta_2 (y_A-\bar{y}_A)^2$, or you could replace the linear + quadratic terms with a spline using a generalized additive model (or regression splines). Graphically you could show the lines as you have shown, with a reduced alpha factor (*), perhaps reducing overplotting further by only showing a random sample of lines. Then you could color the lines according to slope ... For Bland-Altman plots, mentioned in a comment by Nick Cox, see for instance Agreement between methods with multiple observations per individual, or look through the tag bland-altman-plot. (*) The alpha factor here is a graphical parameter making points in the plot transparent, so that the first plotted points are not totally occluded by later overplotting.
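The suggested model reduces, after subtracting $y_A$ from both sides, to regressing the difference $y_B - y_A$ on the centered starting point; a minimal sketch with made-up numbers (ordinary least squares standing in for the GAM/spline versions):

```python
def difference_vs_start(yA, yB):
    """Regress the paired difference (yB - yA) on the centered starting
    point yA; the slope Delta says whether the change depends on where
    you started, and the intercept mu is the mean difference."""
    n = len(yA)
    mA = sum(yA) / n
    diff = [b - a for a, b in zip(yA, yB)]
    mu = sum(diff) / n
    num = sum((a - mA) * (d - mu) for a, d in zip(yA, diff))
    den = sum((a - mA) ** 2 for a in yA)
    return mu, num / den

yA = [1.0, 2.0, 3.0, 4.0]                  # made-up paired values
yB = [1.5, 2.4, 3.3, 4.2]                  # gains shrink as yA grows
mu, delta = difference_vs_start(yA, yB)
print(mu, delta)
```

A negative `delta` here is the classic regression-to-the-mean pattern: high starting values change less.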
28,565
What's a good way of graphically representing a very large number of paired datapoints?
I would prefer the 2D scatter plot. I would draw the reference line in light gray for more contrast in the crowded region. To alleviate crowding, draw the markers without borders, further reduce the alpha, and reduce the marker size. That said, if you are more interested in the typical pairs than in the wings of the distribution, try line-plotting the cumulative sum of the dots versus the cumulative sum of the bars. The plot is still 2D but with much less ink. To also save plotting area, you may rotate the trace by 45° so that the frame serves as the reference direction. That plot would also show any trend in the data. If the process is known to be stationary, sort the pairs by, e.g., their geometric mean, sqrt(bars*dots).
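A sketch of the cumulative-sum trace with the 45° rotation (plain Python, toy data; the $1/\sqrt{2}$ factors implement the rotation): the rotated coordinate $s$ advances along the frame, while $d$ shows the signed deviation of the bars' running total from the dots':

```python
import math
from itertools import accumulate

def cumulative_trace(dots, bars):
    """Cumulative-sum trace of paired data, rotated by 45 degrees: s runs
    along the reference direction (the frame), d is the signed deviation
    of the bars' running total from the dots' running total."""
    cd = list(accumulate(dots))
    cb = list(accumulate(bars))
    s = [(a + b) / math.sqrt(2) for a, b in zip(cd, cb)]
    d = [(b - a) / math.sqrt(2) for a, b in zip(cd, cb)]
    return s, d

dots = [1.0, 2.0, 3.0]                     # made-up paired data
bars = [1.5, 2.0, 2.5]                     # same total, different shape
s, d = cumulative_trace(dots, bars)
print(s, d)
```

Plotting `d` against `s` gives the rotated trace; a trace that returns to zero, as here, means the two series have equal totals.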
What's a good way of graphically representing a very large number of paired datapoints?
I would prefer the 2D scatter plot. I would draw the reference line in light gray for more contrast in the crowded region. To alleviate crowding, draw the markers without border, further reduce alpha,
What's a good way of graphically representing a very large number of paired datapoints? I would prefer the 2D scatter plot. I would draw the reference line in light gray for more contrast in the crowded region. To alleviate crowding, draw the markers without border, further reduce alpha, reduce marker size. That said, if you are more interested in the typical pairs than in the wings of the distribution, try line-plotting the cumulative sum of the dots versus the cumulative sum of the bars. The plot is still 2D but with much less ink. To save also plotting area, you may rotate the trace by 45° so that the frame serves as the reference direction. That plot would also show any trend in the data. If the process is known to be stationary, sort the pairs by, eg, their geometric mean, sqrt(bars*dots).
What's a good way of graphically representing a very large number of paired datapoints? I would prefer the 2D scatter plot. I would draw the reference line in light gray for more contrast in the crowded region. To alleviate crowding, draw the markers without border, further reduce alpha,
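The cumulative-sum trace suggested in this answer can be sketched with synthetic data; the variable names and the ~10% offset between the two series are invented for illustration (the 45° rotation is a plotting detail omitted here):

```python
import numpy as np

# Sketch of the cumulative-sum trace: instead of plotting 10,000 individual
# pairs, plot the running total of one series against the other; departures
# from the 45-degree reference line show systematic differences with less ink.
rng = np.random.default_rng(1)
bars = rng.gamma(2.0, 1.0, size=10_000)
dots = 1.1 * bars + rng.normal(0.0, 0.2, size=10_000)  # dots run ~10% high

order = np.argsort(np.sqrt(bars * dots))  # sort pairs by geometric mean (stationary case)
cx = np.cumsum(bars[order])
cy = np.cumsum(dots[order])
slope = cy[-1] / cx[-1]                   # overall dots/bars ratio, ~1.1 here
print(slope)
```

Plotting `cy` against `cx` gives one smooth trace whose overall slope summarises the systematic dots-vs-bars difference.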
28,566
What's a good way of graphically representing a very large number of paired datapoints?
I would recommend plotting the lines as you have them for the median and the quartiles, or as many percentiles as you would like for that matter. The median could remain thicker/more discernible than the other percentile lines. This would help preserve the ability to see how the data behave across the distribution without compromising the simplicity and familiarity of the plot that is currently used in your field. Also, with such a high sample size, the mean or median trend with error bars would likely be sufficient since you would so thoroughly be enjoying the central limit theorem. The biomedical field also relies on those paired line plots, but this is often the case because the sample size could be on the order of 10-20, so it is important to visualise potential leverage points.
What's a good way of graphically representing a very large number of paired datapoints?
I would recommend plotting the lines as you have them for the median and the quartiles, or as many percentiles as you would like for that matter. The median could remain thicker/more discernible than
What's a good way of graphically representing a very large number of paired datapoints? I would recommend plotting the lines as you have them for the median and the quartiles, or as many percentiles as you would like for that matter. The median could remain thicker/more discernible than the other percentile lines. This would help preserve the ability to see how the data behave across the distribution without compromising the simplicity and familiarity of the plot that is currently used in your field. Also, with such a high sample size, the mean or median trend with error bars would likely be sufficient since you would so thoroughly be enjoying the central limit theorem. The biomedical field also relies on those paired line plots, but this is often the case because the sample size could be on the order of 10-20, so it is important to visualise potential leverage points.
What's a good way of graphically representing a very large number of paired datapoints? I would recommend plotting the lines as you have them for the median and the quartiles, or as many percentiles as you would like for that matter. The median could remain thicker/more discernible than
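The percentile-lines idea from this answer can be sketched numerically; the data and the ~1-unit typical gain are synthetic and purely illustrative:

```python
import numpy as np

# Sketch of the percentile-lines idea: summarise 10,000 paired measurements by
# their quartiles (the median would be drawn thicker in the actual plot)
# instead of drawing every individual pair.
rng = np.random.default_rng(2)
before = rng.normal(10.0, 2.0, size=10_000)
after = before + rng.normal(1.0, 1.0, size=10_000)

qs = [25, 50, 75]
q_before = np.percentile(before, qs)
q_after = np.percentile(after, qs)
q_change = np.percentile(after - before, qs)  # quartiles of the paired change
print(q_before, q_after, q_change)
```

Three lines (or a few more percentiles) replace 10,000 individual connecting segments while still showing how the whole distribution shifts.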
28,567
What's a good way of graphically representing a very large number of paired datapoints?
My first suggestion is a scatter plot. If 10000 dots unevenly spread across your plot still form a vague cloud, consider a heat map. The colour of the pixel at x = 10.5, y = 11.5 would indicate how many times a value between 10.45 and 10.55 is mapped onto a value between 11.45 and 11.55: 0 = white = RGB(255,255,255), 1 = blue = RGB(0,0,255), 2 = RGB(1,0,254), ... 256 and above = RGB(255,0,0) = red
What's a good way of graphically representing a very large number of paired datapoints?
My first suggestion is a scatter plot. If 10000 dots unevenly spread in your plot is still a vague cloud, consider a heat map. The colour of the pixel at x = 10.5, y = 11.5 would indicate how many tim
What's a good way of graphically representing a very large number of paired datapoints? My first suggestion is a scatter plot. If 10000 dots unevenly spread across your plot still form a vague cloud, consider a heat map. The colour of the pixel at x = 10.5, y = 11.5 would indicate how many times a value between 10.45 and 10.55 is mapped onto a value between 11.45 and 11.55: 0 = white = RGB(255,255,255), 1 = blue = RGB(0,0,255), 2 = RGB(1,0,254), ... 256 and above = RGB(255,0,0) = red
What's a good way of graphically representing a very large number of paired datapoints? My first suggestion is a scatter plot. If 10000 dots unevenly spread in your plot is still a vague cloud, consider a heat map. The colour of the pixel at x = 10.5, y = 11.5 would indicate how many tim
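The binning behind the heat-map suggestion can be sketched with `numpy.histogram2d`; the data below are synthetic and the 50×50 grid is an arbitrary choice:

```python
import numpy as np

# Sketch of the heat-map idea: bin the 10,000 pairs on a grid and count how
# many pairs land in each cell; those counts drive the pixel colours.
rng = np.random.default_rng(3)
x = rng.normal(10.0, 1.0, size=10_000)
y = x + rng.normal(1.0, 0.5, size=10_000)

counts, xedges, yedges = np.histogram2d(x, y, bins=50)
print(int(counts.sum()))  # every pair falls in exactly one cell -> 10000
```

A colour map applied to `counts` (e.g. white for empty cells ramping to red for dense ones, as the answer describes) then replaces the overplotted scatter.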
28,568
Taming of the skew... Why are there so many skew functions?
Let's start with the one you describe as "an old method"; this is the second Pearson skewness, or median-skewness; in fact the moment-skewness and that are of broadly the same vintage (the median skewness is actually a bit younger since the moment skewness precedes Pearson's efforts). A little discussion of some of the history can be found here; that post may also throw a little light on a couple of your other questions. If you search our site using second Pearson skewness you'll hit quite a few posts that contain some discussion of the behavior of this measure. It's not really any weirder than the moment skewness measures in my mind; they both sometimes do some odd things that don't match people's expectations of a skewness measure. The usual form of $b_1$ is discussed in Wikipedia here; as it says, it's a method of moments estimator, and a natural thing to use given the population calculation in terms of standardized third moment. If one uses $s_n$ for $s_{n-1}$ (i.e. without Bessel correction) you get the $g_1$ type you mention; either of those are what I'd call "method of moments". It's not clear to me there's much point trying to unbias the denominator since that doesn't necessarily unbias the ratio; it may make sense to do it so that the calculation matches what people might expect to do by hand. However, there's a second (equivalent) way to define population skewness, in terms of cumulants (see the above Wikipedia link), and if for a sample skewness you used unbiased estimates of those, you get $G_1$. [Note further that multiplying the numerator in $b_1$ by $\frac{n^2}{(n-1)(n-2)}$ unbiases it, so that can be another reason people look at that form. If one attempts to unbias both the third and second moment calculations, one obtains a slightly different factor in $n,(n-1)$ and $(n-2)$ coming out the front.] All three of those are simply slightly different variations on third-moment skewness. In very large samples there's really no difference which you use. 
In smaller samples they all have slightly different biases and variance. The forms discussed here don't exhaust the definitions of skewness (I've seen about a dozen, I think - the Wikipedia article lists quite a few, but even that doesn't cover the gamut), nor even the definitions related to third-moment skewness, of which I've seen more than the three you raise here. Why are there many measures of skewness? So (treating all those third-moment skewnesses as one for a moment) why so many different skewnesses? Partly it's because skewness as a notion is actually quite hard to pin down. It's a slippery thing you can't really pin down to a single number. As a result, all the definitions are less than adequate in some way, but nevertheless usually accord with our broad sense of what we think a skewness measure should do. People keep trying to come up with better definitions, but the old measures, like QWERTY keyboards, aren't going anywhere. Why are there several measures of skewness based on the 3rd moment? As for why so many third-moment skewnesses, that's simply because there's more than one way to turn a population-measure into a sample measure. We saw two routes based on moments and one based on cumulants. We could construct still more; we might for example try to get a (small-sample) unbiased measure under some distributional assumption, or a minimum-mean-square-error measure or some other such quantity. You might find some of the posts on site relating to skewness enlightening; there are some that show examples of distributions which are not symmetric but have zero third moment skewness. There's some that show the Pearson median-skewness and the third moment skewness can have opposite signs. Here are links to a few posts relating to skewness: Does mean = median imply that a unimodal distribution is symmetric? In left skewed data, what is the relationship between mean and median? how to determine skewness from histogram with outliers? 
In relation to your final question about the calculation of $b_1$: $\sqrt{n} \cdot \frac{\sum{(x-\bar{x})^3}}{(\sum({x - \bar{x}})^2)^{3/2}}\qquad$ #from e1071::skewness source $\frac{\sum(x - \bar{x})^3/n}{(\sum(x - \bar{x})^2/n)^{3/2}}\qquad$ #from moments and e1071 help page The two forms are algebraically identical; the second is clearly written in the form "third moment on second moment to the power $\frac32$", while the first just cancels out terms in $n$ and brings the leftovers out the front. I don't think it was done for reasons of avoiding overflow/underflow; I imagine it was done because it was thought to be a little faster. [If overflow or underflow are a concern one would probably arrange the calculations differently.]
Taming of the skew... Why are there so many skew functions?
Let's start with the one you describe as "an old method"; this is the second Pearson skewness, or median-skewness; in fact the moment-skewness and that are of broadly the same vintage (the median skew
Taming of the skew... Why are there so many skew functions? Let's start with the one you describe as "an old method"; this is the second Pearson skewness, or median-skewness; in fact the moment-skewness and that are of broadly the same vintage (the median skewness is actually a bit younger since the moment skewness precedes Pearson's efforts). A little discussion of some of the history can be found here; that post may also throw a little light on a couple of your other questions. If you search our site using second Pearson skewness you'll hit quite a few posts that contain some discussion of the behavior of this measure. It's not really any weirder than the moment skewness measures in my mind; they both sometimes do some odd things that don't match people's expectations of a skewness measure. The usual form of $b_1$ is discussed in Wikipedia here; as it says, it's a method of moments estimator, and a natural thing to use given the population calculation in terms of standardized third moment. If one uses $s_n$ for $s_{n-1}$ (i.e. without Bessel correction) you get the $g_1$ type you mention; either of those are what I'd call "method of moments". It's not clear to me there's much point trying to unbias the denominator since that doesn't necessarily unbias the ratio; it may make sense to do it so that the calculation matches what people might expect to do by hand. However, there's a second (equivalent) way to define population skewness, in terms of cumulants (see the above Wikipedia link), and if for a sample skewness you used unbiased estimates of those, you get $G_1$. [Note further that multiplying the numerator in $b_1$ by $\frac{n^2}{(n-1)(n-2)}$ unbiases it, so that can be another reason people look at that form. If one attempts to unbias both the third and second moment calculations, one obtains a slightly different factor in $n,(n-1)$ and $(n-2)$ coming out the front.] All three of those are simply slightly different variations on third-moment skewness. 
In very large samples there's really no difference which you use. In smaller samples they all have slightly different biases and variance. The forms discussed here don't exhaust the definitions of skewness (I've seen about a dozen, I think - the Wikipedia article lists quite a few, but even that doesn't cover the gamut), nor even the definitions related to third-moment skewness, of which I've seen more than the three you raise here. Why are there many measures of skewness? So (treating all those third-moment skewnesses as one for a moment) why so many different skewnesses? Partly it's because skewness as a notion is actually quite hard to pin down. It's a slippery thing you can't really pin down to a single number. As a result, all the definitions are less than adequate in some way, but nevertheless usually accord with our broad sense of what we think a skewness measure should do. People keep trying to come up with better definitions, but the old measures, like QWERTY keyboards, aren't going anywhere. Why are there several measures of skewness based on the 3rd moment? As for why so many third-moment skewnesses, that's simply because there's more than one way to turn a population-measure into a sample measure. We saw two routes based on moments and one based on cumulants. We could construct still more; we might for example try to get a (small-sample) unbiased measure under some distributional assumption, or a minimum-mean-square-error measure or some other such quantity. You might find some of the posts on site relating to skewness enlightening; there are some that show examples of distributions which are not symmetric but have zero third moment skewness. There's some that show the Pearson median-skewness and the third moment skewness can have opposite signs. Here are links to a few posts relating to skewness: Does mean = median imply that a unimodal distribution is symmetric? In left skewed data, what is the relationship between mean and median? 
how to determine skewness from histogram with outliers? In relation to your final question about the calculation of $b_1$: $\sqrt{n} \cdot \frac{\sum{(x-\bar{x})^3}}{(\sum({x - \bar{x}})^2)^{3/2}}\qquad$ #from e1071::skewness source $\frac{\sum(x - \bar{x})^3/n}{(\sum(x - \bar{x})^2/n)^{3/2}}\qquad$ #from moments and e1071 help page The two forms are algebraically identical; the second is clearly written in the form "third moment on second moment to the power $\frac32$", while the first just cancels out terms in $n$ and brings the leftovers out the front. I don't think it was done for reasons of avoiding overflow/underflow; I imagine it was done because it was thought to be a little faster. [If overflow or underflow are a concern one would probably arrange the calculations differently.]
Taming of the skew... Why are there so many skew functions? Let's start with the one you describe as "an old method"; this is the second Pearson skewness, or median-skewness; in fact the moment-skewness and that are of broadly the same vintage (the median skew
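The three third-moment variants discussed in this answer ($b_1$ with $s_{n-1}$, $g_1$ with $s_n$, and the cumulant-based $G_1$), along with the identity between the two quoted e1071/moments forms, can be checked numerically. This Python sketch mirrors the R formulas; the sample values are arbitrary:

```python
import numpy as np

# Numerical check of the three third-moment skewness variants discussed above:
# g1 uses s_n in the denominator, b1 uses s_{n-1}, and G1 rescales g1 via the
# unbiased-cumulant route.
x = np.array([1.0, 2.0, 2.0, 3.0, 3.0, 3.0, 4.0, 7.0, 9.0, 15.0])
n = len(x)
d = x - x.mean()

m2, m3 = (d**2).mean(), (d**3).mean()
g1 = m3 / m2**1.5                                 # s_n in the denominator
b1 = m3 / ((d**2).sum() / (n - 1)) ** 1.5         # s_{n-1} in the denominator
G1 = g1 * np.sqrt(n * (n - 1)) / (n - 2)          # unbiased-cumulant version

# The two quoted e1071/moments forms of the g1-type statistic are identical:
form1 = np.sqrt(n) * (d**3).sum() / (d**2).sum() ** 1.5
form2 = ((d**3).sum() / n) / ((d**2).sum() / n) ** 1.5
print(g1, b1, G1)
```

As the answer notes, the three agree in large samples; with $n = 10$ they visibly differ, since $b_1 = g_1 \left(\frac{n-1}{n}\right)^{3/2}$ and $G_1 = g_1 \frac{\sqrt{n(n-1)}}{n-2}$.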
28,569
Why doesn't a sample proportion also have a binomial distribution
As you state, the sample proportion is a scaled binomial (under a few assumptions). But a scaled binomial is not a binomial distribution; a binomial can only take on integer values, for example. Of course, it is very easy to figure out the pmf, cdf, expected value, variance, etc. from what we know of the binomial distribution, which I think is what you're getting at. But if you were to say something like "the sample proportion is a binomial, so the expected value is $np$, as it is for all binomials", you would be clearly wrong. If you wanted to be really technical, if $n = 1$, then the sample proportion still has a binomial distribution.
Why doesn't a sample proportion also have a binomial distribution
As you state, the sample proportion is a scaled binomial (under a few assumptions). But a scaled binomial is not a binomial distribution; a binomial can only take on integer values, for example. Of co
Why doesn't a sample proportion also have a binomial distribution As you state, the sample proportion is a scaled binomial (under a few assumptions). But a scaled binomial is not a binomial distribution; a binomial can only take on integer values, for example. Of course, it is very easy to figure out the pmf, cdf, expected value, variance, etc. from what we know of the binomial distribution, which I think is what you're getting at. But if you were to say something like "the sample proportion is a binomial, so the expected value is $np$, as it is for all binomials", you would be clearly wrong. If you wanted to be really technical, if $n = 1$, then the sample proportion still has a binomial distribution.
Why doesn't a sample proportion also have a binomial distribution As you state, the sample proportion is a scaled binomial (under a few assumptions). But a scaled binomial is not a binomial distribution; a binomial can only take on integer values, for example. Of co
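The point of this answer can be verified exactly with rational arithmetic; the choice of $n = 10$ and $p = 1/4$ below is arbitrary:

```python
from fractions import Fraction
from math import comb

# The sample proportion X/n is a *scaled* binomial: its support is
# {0, 1/n, ..., 1} rather than the integers, and its exact mean is p, not n*p.
n, p = 10, Fraction(1, 4)
pmf = {Fraction(k, n): comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(n + 1)}

mean = sum(value * prob for value, prob in pmf.items())
print(mean)  # 1/4 = p, whereas the unscaled binomial has mean n*p = 5/2
```

Because `Fraction` keeps everything exact, this confirms $E[X/n] = p$ with no floating-point slack.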
28,570
Does non-stationarity in logit/probit matter?
Whatever model you are using, the fundamentals of econometric theory should be checked and respected. Researchers strut about their use of very sophisticated models, but often –more or less voluntarily– they forget about the fundamentals of econometrics; they hence become quite ridiculous. Econometrics is no more than estimating the mean and variance of your parameters, but if the mean, variance and covariance of your variables change over time, suitable devices and analyses must be employed. In my opinion, probit/logit models with non-stationary data make no sense, because you want to fit the right-hand side of your equation (which is non-stationary) to the left-hand side, which is a binary variable. The structure of the time dynamics of your independent variables must be coherent with that of the dependent one. If some of your regressors are non-stationary, you are mis-specifying your relation; indeed, the combination of your regressors must be stationary. So I believe that you probably have to do a two-step regression. In the first step you find a stationary relation among your variables; then you put this relation into your probit/logit model and estimate only one parameter. Obviously in the first step you must have at least two integrated variables (in the cointegration case) or at least two variables with the same type of trend. If this is not the case you have a problem of omitted variables. The alternative to all this is to change the scope of your analysis and transform all your regressors into stationary ones.
Does non-stationarity in logit/probit matter?
Whatever model you are using, the fundamentals of econometrics theory should be checked and respected. Researchers strut about their use of very sophisticated models, but often –more or less voluntar
Does non-stationarity in logit/probit matter? Whatever model you are using, the fundamentals of econometric theory should be checked and respected. Researchers strut about their use of very sophisticated models, but often –more or less voluntarily– they forget about the fundamentals of econometrics; they hence become quite ridiculous. Econometrics is no more than estimating the mean and variance of your parameters, but if the mean, variance and covariance of your variables change over time, suitable devices and analyses must be employed. In my opinion, probit/logit models with non-stationary data make no sense, because you want to fit the right-hand side of your equation (which is non-stationary) to the left-hand side, which is a binary variable. The structure of the time dynamics of your independent variables must be coherent with that of the dependent one. If some of your regressors are non-stationary, you are mis-specifying your relation; indeed, the combination of your regressors must be stationary. So I believe that you probably have to do a two-step regression. In the first step you find a stationary relation among your variables; then you put this relation into your probit/logit model and estimate only one parameter. Obviously in the first step you must have at least two integrated variables (in the cointegration case) or at least two variables with the same type of trend. If this is not the case you have a problem of omitted variables. The alternative to all this is to change the scope of your analysis and transform all your regressors into stationary ones.
Does non-stationarity in logit/probit matter? Whatever model you are using, the fundamentals of econometrics theory should be checked and respected. Researchers strut about their use of very sophisticated models, but often –more or less voluntar
28,571
Does non-stationarity in logit/probit matter?
I suggest looking at the results in Chang Jiang Park (2006) and Park, Phillips (2000).* According to the first paper, logit estimators are consistent even in the case of integrated series (Theorem 2, pages 6-7), and the usual t-statistics can be used for the parameters of interest in your case (the coefficients on the regressors). Other papers by the same authors develop econometric theory for other cases of non-stationary processes in non-linear models. *These papers treat only theory; unfortunately, I am unable to find an example of an empirical paper actually mentioning the issue of non-stationarity in this context.
Does non-stationarity in logit/probit matter?
I suggest looking at the results in Chang Jiang Park (2006) and Park, Phillips (2000).* According to the first paper, logit estimators are consistent even in the case of integrated series (theorem 2 a
Does non-stationarity in logit/probit matter? I suggest looking at the results in Chang Jiang Park (2006) and Park, Phillips (2000).* According to the first paper, logit estimators are consistent even in the case of integrated series (Theorem 2, pages 6-7), and the usual t-statistics can be used for the parameters of interest in your case (the coefficients on the regressors). Other papers by the same authors develop econometric theory for other cases of non-stationary processes in non-linear models. *These papers treat only theory; unfortunately, I am unable to find an example of an empirical paper actually mentioning the issue of non-stationarity in this context.
Does non-stationarity in logit/probit matter? I suggest looking at the results in Chang Jiang Park (2006) and Park, Phillips (2000).* According to the first paper, logit estimators are consistent even in the case of integrated series (theorem 2 a
28,572
Does non-stationarity in logit/probit matter?
I know this post is old, but people do searches and often use this stuff as a reference. Let's keep it simple. Let's have a model of the individual probability of defaulting on a mortgage as our Y. Now let's run level GDP on it. Let's say your data is 2002-2017, quarterly. You have millions of observations that at time T all share the same econ variables. I pick this time frame for a good reason. What will you get as a relationship? Oh man, you will find that, shazaam, lower GDP is correlated with higher defaults. Looks good, right? But now let's forecast this out, say 50 years (for the fun of it). Take the expected GDP at the historical growth rate, say 2%, and extrapolate GDP. Now run the forecast. What do you find? Shazaam, like magic the probability of default will trend towards 0%. You would get the opposite if you picked the total number of unemployed (not the rate). You will find that, shazaam, forecast it out into the future and the probability of default trends to 100%. Both are ridiculous. And here is the kicker: if you did a stationarity test on some of these time frames you would find that they are stationary. The reason is you can dice a non-stationary series into stationary parts, particularly because real GDP increased, decreased, and increased again over the period. Yes, your in-sample fit will look good. But your forecasts will be meaningless. I see this frequently in risk-metric modeling.
Does non-stationarity in logit/probit matter?
I know this post is old but people do searches and often use this stuff as reference. Let's keep it simple. Let's have a model of individual probability of defaulting on a mortgage as our Y. Now lets
Does non-stationarity in logit/probit matter? I know this post is old, but people do searches and often use this stuff as a reference. Let's keep it simple. Let's have a model of the individual probability of defaulting on a mortgage as our Y. Now let's run level GDP on it. Let's say your data is 2002-2017, quarterly. You have millions of observations that at time T all share the same econ variables. I pick this time frame for a good reason. What will you get as a relationship? Oh man, you will find that, shazaam, lower GDP is correlated with higher defaults. Looks good, right? But now let's forecast this out, say 50 years (for the fun of it). Take the expected GDP at the historical growth rate, say 2%, and extrapolate GDP. Now run the forecast. What do you find? Shazaam, like magic the probability of default will trend towards 0%. You would get the opposite if you picked the total number of unemployed (not the rate). You will find that, shazaam, forecast it out into the future and the probability of default trends to 100%. Both are ridiculous. And here is the kicker: if you did a stationarity test on some of these time frames you would find that they are stationary. The reason is you can dice a non-stationary series into stationary parts, particularly because real GDP increased, decreased, and increased again over the period. Yes, your in-sample fit will look good. But your forecasts will be meaningless. I see this frequently in risk-metric modeling.
Does non-stationarity in logit/probit matter? I know this post is old but people do searches and often use this stuff as reference. Let's keep it simple. Let's have a model of individual probability of defaulting on a mortgage as our Y. Now lets
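The extrapolation trap described in this answer can be made concrete with a toy calculation; the logit coefficients and the GDP path below are invented for illustration, not estimated from any data:

```python
import numpy as np

# Toy illustration of the extrapolation trap: with a fixed negative coefficient
# on *level* GDP, any steadily growing GDP path mechanically drives the
# predicted default probability towards 0.
def logistic(z):
    return 1.0 / (1.0 + np.exp(-z))

beta0, beta1 = 2.0, -0.5             # hypothetical logit coefficients
gdp = 10.0 * 1.02 ** np.arange(200)  # 2% growth extrapolated 200 periods

p_default = logistic(beta0 + beta1 * gdp)
print(p_default[0], p_default[-1])   # ~0.047 at the start, ~0 at the end
```

The in-sample portion of such a fit can look perfectly reasonable; it is the level regressor combined with a trending extrapolation that forces the forecast into the absurd limit.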
28,573
Does non-stationarity in logit/probit matter?
You are clearly fine from a theoretical perspective. It is a mistaken understanding of non-stationary series that they have changing means. They have no mean. The sample average is a random number because it converges to no point and so appears to change. This is also no problem for logit or probit. Statistical models are mappings, and there is no reason one cannot wrap an unbounded series into a bounded series. For example, the real number line is normally thought of as having no length at all, but wrap it around a circle with the south pole being 0 and the north pole being $\infty$, and for a unit circle the entire number line now has length $\pi$. By mapping a non-stationary series to a well-bounded set, you have created a well-bounded problem, as the ultimate solution has to map to the interval [0,1]. All accounting ratios must lack a variance, and all financial returns must lack a variance. See the paper at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2828744 You do not need to intrinsically worry about robust errors. It is a misunderstanding of non-stationary series that they are heteroskedastic. They are not; they are askedastic, because they have no mean to form a variance about in the first place, so it is again a random number. The error term's structure has more to do with the model that maps than with the lack of stationarity. Where you could face a problem is with the concept of covariance. The distribution of equity returns is from a distribution that lacks a covariance matrix. It isn't that stocks cannot comove, but they cannot covary. The same thing is true for economies. It is a more complex concept than covariance, which is a simple relationship. You will want to read the paper above and think through your model relationships carefully.
Does non-stationarity in logit/probit matter?
You are clearly fine from a theoretical perspective. It is a mistaken understanding of non-stationary series that they have changing means. They have no mean. The sample average is a random number
Does non-stationarity in logit/probit matter? You are clearly fine from a theoretical perspective. It is a mistaken understanding of non-stationary series that they have changing means. They have no mean. The sample average is a random number because it converges to no point and so appears to change. This is also no problem for logit or probit. Statistical models are mappings, and there is no reason one cannot wrap an unbounded series into a bounded series. For example, the real number line is normally thought of as having no length at all, but wrap it around a circle with the south pole being 0 and the north pole being $\infty$, and for a unit circle the entire number line now has length $\pi$. By mapping a non-stationary series to a well-bounded set, you have created a well-bounded problem, as the ultimate solution has to map to the interval [0,1]. All accounting ratios must lack a variance, and all financial returns must lack a variance. See the paper at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2828744 You do not need to intrinsically worry about robust errors. It is a misunderstanding of non-stationary series that they are heteroskedastic. They are not; they are askedastic, because they have no mean to form a variance about in the first place, so it is again a random number. The error term's structure has more to do with the model that maps than with the lack of stationarity. Where you could face a problem is with the concept of covariance. The distribution of equity returns is from a distribution that lacks a covariance matrix. It isn't that stocks cannot comove, but they cannot covary. The same thing is true for economies. It is a more complex concept than covariance, which is a simple relationship. You will want to read the paper above and think through your model relationships carefully.
Does non-stationarity in logit/probit matter? You are clearly fine from a theoretical perspective. It is a mistaken understanding of non-stationary series that they have changing means. They have no mean. The sample average is a random number
28,574
Conclusions from output of a principal component analysis
Yes. This is the correct interpretation. Yes, rotation values indicate the component loading values. This is confirmed by the prcomp documentation, though I'm not sure why they label this part of the output "Rotation", as it implies the loadings have been rotated using some orthogonal (likely) or oblique (less likely) method. While it does appear to be the case that Sepal.Length, Petal.Length, and Petal.Width are all positively associated, I would not put as much stock in the small negative loading of Sepal.Width on PC1; it loads much more strongly (almost exclusively) on PC2. To be clear, Sepal.Width is still likely negatively associated with the other three variables, but it just doesn't seem to be strongly related to the first principal component. Based on this question, I wonder whether you would be better served by using a common factor (CF) analysis, rather than a principal components analysis (PCA). CF is a more appropriate data-reducing technique when your goal is to uncover meaningful theoretical dimensions--such as the plant-factor that you are hypothesizing may affect Sepal.Length, Petal.Length, and Petal.Width. I appreciate you're from some sort of biological science--botany perhaps--but there's some good writing in Psychology on the PCA v. CF distinction by Fabrigar et al., 1999, Widaman, 2007, and others. The core distinction between the two is that PCA assumes that all variance is true-score variance--no error is assumed--whereas CF partitions true-score variance from error variance before factors are extracted and factor loadings are estimated. Ultimately you might get a similar-looking solution--people sometimes do--but when they do diverge, it tends to be the case that PCA overestimates loading values and underestimates the correlations between components. 
An additional perk of the CF approach is that you can use maximum likelihood estimation to perform significance tests of loading values, while also getting some indexes of how well your chosen solution (1 factor, 2 factors, 3 factors, or 4 factors) explains your data. I would plot the factor loading values as you have, without weighting their bars by the proportion of variance for their respective components. I understand what you want to try to show by such an approach, but I think it would likely lead readers to misunderstand the component loading values from your analysis. However, if you wanted a visual way of showing the relative magnitude of variance accounted for by each component, you might consider manipulating the opacity of the groups' bars (if you're using ggplot2, I believe this is done with the alpha aesthetic), based on the proportion of variance explained by each component (i.e., more solid colors = more variance explained). However, in my experience, your figure is not a typical way of presenting the results of a PCA--I think a table or two (loadings + variance explained in one, component correlations in another) would be much more straightforward. References Fabrigar, L. R., Wegener, D. T., MacCallum, R. C., & Strahan, E. J. (1999). Evaluating the use of exploratory factor analysis in psychological research. Psychological Methods, 4, 272-299. Widaman, K. F. (2007). Common factors versus components: Principals and principles, errors, and misconceptions. In R. Cudeck & R. C. MacCallum (Eds.), Factor analysis at 100: Historic developments and future directions (pp. 177-203). Mahwah, NJ: Lawrence Erlbaum.
28,575
Conclusions from output of a principal component analysis
No, not the total variance of the data: it is the total variance of the data given that you want to express it in 4 principal components. You can always capture more of the total variance by adding more principal components, but the gain decays rapidly.
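A quick way to see this decay, assuming the iris data from the question: the cumulative proportion of variance reaches 1 only when all components are kept, but most of it sits in the first one or two.

```r
# PCA on standardized iris measurements; watch the cumulative
# proportion of variance climb quickly and then flatten out.
p <- prcomp(iris[, 1:4], scale. = TRUE)
summary(p)  # see the "Cumulative Proportion" row
cumsum(p$sdev^2) / sum(p$sdev^2)  # same quantity, computed directly
```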
28,576
Using LASSO for variable selection, then using Logit
There is a package in R called glmnet that can fit a LASSO logistic model for you! This will be more straightforward than the approach you are considering. More precisely, glmnet fits the elastic net, a hybrid between LASSO and ridge regression, but you may set the parameter $\alpha=1$ to fit a pure LASSO model. Since you are interested in logistic regression, you will set family="binomial". You can read more here: http://web.stanford.edu/~hastie/glmnet/glmnet_alpha.html#intro
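As a hedged sketch (the x and y below are simulated stand-ins for your own design matrix and 0/1 outcome), a pure-LASSO logistic fit with cross-validated penalty choice looks like:

```r
library(glmnet)
set.seed(1)
x <- matrix(rnorm(100 * 10), 100, 10)             # placeholder predictors
y <- rbinom(100, 1, plogis(x[, 1] - x[, 2]))      # placeholder 0/1 outcome
# alpha = 1 gives pure LASSO; family = "binomial" gives logistic loss
cvfit <- cv.glmnet(x, y, family = "binomial", alpha = 1)
coef(cvfit, s = "lambda.min")  # coefficients shrunk to 0 are "deselected"
```

This does shrinkage and selection in one step, so no second-stage logit refit is needed.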
28,577
Using LASSO for variable selection, then using Logit
First, there's no guarantee that a linear probability model will approximate a logit model very well; consequently the subset of variables selected for one may be less appropriate for the other. Second, the re-fitting applies no shrinkage at all, despite the variable selection that's taken place in the first step; risking serious mis-calibration & perhaps a little loss of discrimination. You may be able to validate the procedure on a particular data-set, but it doesn't seem safe in general, or to offer any advantage over a stepwise logistic regression. And of course it's unnecessary; LASSO's $L_1$-norm penalty can be used for shrinkage & selection in logistic regression.
28,578
How to rearrange 2D data to get given correlation?
Here is one way to rearrange the data that is based on generating additional random numbers. We draw samples from a bivariate normal distribution with specified correlation. Next, we compute the ranks of the $x$ and $y$ values we obtain. These ranks are used to order the original values. For this approach, we have to sort both the original $x$ and $y$ values. First, we create the actual data set (like in your example). set.seed(1) d <- data.frame(x = runif(100, 0, 100), y = runif(100, 0, 100)) cor(d$x, d$y) # [1] 0.01703215 Now, we specify a correlation matrix. corr <- 0.7 # target correlation corr_mat <- matrix(corr, ncol = 2, nrow = 2) diag(corr_mat) <- 1 corr_mat # [,1] [,2] # [1,] 1.0 0.7 # [2,] 0.7 1.0 We generate random data following a bivariate normal distribution with $\mu = 0$, $\sigma = 1$ (for both variables) and the specified correlation. In R, this can be done with the mvrnorm function from the MASS package. We use empirical = TRUE to indicate that the correlation is the empirical correlation (not the population correlation). library(MASS) mvdat <- mvrnorm(n = nrow(d), mu = c(0, 0), Sigma = corr_mat, empirical = TRUE) cor(mvdat) # [,1] [,2] # [1,] 1.0 0.7 # [2,] 0.7 1.0 The random data perfectly matches the specified correlation. Next, we compute the ranks of the random data. rx <- rank(mvdat[ , 1], ties.method = "first") ry <- rank(mvdat[ , 2], ties.method = "first") To use the ranks for the original data in d, we have to sort the original data. dx_sorted <- sort(d$x) dy_sorted <- sort(d$y) Now, we can use the ranks to specify the order of the sorted data. cor(dx_sorted[rx], dy_sorted[ry]) # [1] 0.6868986 The obtained correlation does not perfectly match the specified one, but the difference is relatively small. Here, dx_sorted[rx] and dy_sorted[ry] are resampled versions of the original data in d.
28,579
How to rearrange 2D data to get given correlation?
To generate two uniform distributions with a specified correlation, the Ruscio & Kaczetow (2008) algorithm will work. They provide R code. You can then transform with a simple linear function to get your target min, max, mean, and SD. Ruscio & Kaczetow Algorithm I'll summarize the bivariate case, but it can also work with multivariate problems. Uncorrelated $X_0$ and $Y_0$ are generated with any shape (e.g., uniform). Then, $X_1$ and $Y_1$ are generated as bivariate normal with an intermediate correlation. $X_1$ and $Y_1$ are replaced by $X_0$ and $Y_0$ in a rank-preserving fashion. Adjust the intermediate correlation to be higher or lower depending on whether r($X_1,Y_1$) is too low or too high. $X_2$ and $Y_2$ are generated as bivariate normal with the new intermediate correlation. Repeat. Notice that this is very similar to @Sven Hohenstein's solution, except that it's iterative, so the intermediate correlation will get closer and closer to the target correlation until they are indistinguishable. Also, note that this algorithm can be used to generate a large population (e.g., N=1 million) from which to draw smaller samples - that is useful if you need to have sampling error. For a related post: Correlation and non-normal distributions Preserving Descriptive Statistics There is no guarantee that the algorithm will produce the exact same descriptives. However, because a uniform distribution's mean and SD are determined by its min and max, you can simply adjust the min and max to fix everything. Let $X_g$ and $Y_g$ be your generated variables from the last iteration of the Ruscio & Kaczetow algorithm, $X_f$ and $Y_f$ be your final variables that you hope to have (with target descriptives), and $X$ and $Y$ be the original variables in your dataset. Calculate $X_f = \min(X) + (X_g - \min(X_g)) \cdot \frac{\max(X)-\min(X)}{\max(X_g)-\min(X_g)}$ Do the same for $Y_f$ Reference: Ruscio, J., & Kaczetow, W. (2008). Simulating multivariate nonnormal data using an iterative algorithm. 
Multivariate Behavioral Research, 43, 355–381. doi:10.1080/00273170802285693
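The steps above can be sketched as follows. This is a rough re-implementation of the idea for the bivariate uniform case (not Ruscio & Kaczetow's published code), reusing MASS::mvrnorm for the intermediate bivariate normal:

```r
# Iteratively nudge an intermediate normal correlation until the
# rank-replaced uniform data reach the target correlation.
library(MASS)
set.seed(1)
x0 <- runif(1000); y0 <- runif(1000)  # target marginal shapes (uniform)
target <- 0.7
rho <- target                          # start intermediate corr at target
for (i in 1:50) {
  S  <- matrix(c(1, rho, rho, 1), 2)
  mv <- mvrnorm(1000, mu = c(0, 0), Sigma = S, empirical = TRUE)
  # rank-preserving replacement of the normal draws by the uniform data
  xf <- sort(x0)[rank(mv[, 1], ties.method = "first")]
  yf <- sort(y0)[rank(mv[, 2], ties.method = "first")]
  err <- target - cor(xf, yf)
  if (abs(err) < 1e-4) break
  rho <- min(max(rho + err, -1), 1)    # raise/lower intermediate corr
}
cor(xf, yf)  # close to the 0.7 target
```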
28,580
How to rearrange 2D data to get given correlation?
I'm guessing that when you say "resample" you mean "simulate," which is more general. The following is the most concise way I know to simulate normal, bivariate data with a specified correlation. Substitute your own desired values for r and n. r = .6 n = 1000 x = rnorm(n) z = rnorm(n) y = (r/(1-r^2)^.5)*x + z cor(x,y) plot(x,y) abline(lm(y~x), col="red")
28,581
Intrinsic spatial stationarity: doesn't it only apply for small lags?
Yes and no. Yes I recall that Andre Journel long ago emphasized the points that Stationarity assumptions are decisions made by the analyst concerning what kind of model to use. They are not inherent properties of the phenomenon. Such assumptions are robust to departures because kriging (at least as practiced 20+ years ago) was almost always a local estimator based on selection of nearby data within moving search neighborhoods. These points support the impression that intrinsic stationarity is purely a local property by suggesting that in practice it need only hold within a typical search neighborhood, and then only approximately. No However, mathematically it is indeed the case that the expected differences must all be exactly zero, regardless of the distance $|h|$. In fact, if all you assumed were that the expected differences are continuous in the lag $h$, you wouldn't be assuming much at all! That weaker assumption would be tantamount to asserting a lack of structural breaks in the expectation (which wouldn't even imply a lack of structural breaks in the realizations of the process), but otherwise it could not be exploited to construct the kriging equations nor even estimate a variogram. To appreciate just how weak (and practically useless) the assumption of mean continuity can be, consider a process $Z$ on the real line for which $$Z(x) = U\text{ if } x \lt 0;\ Z(x) = -U\text{ otherwise }$$ where $U$ has a standard Normal distribution. The graph of a realization will consist of a half-line at height $u$ for negative $x$ and another half-line at height $-u$ for positive $x$. For any $x$ and $h$, $$E(Z(x)-Z(x-h)) = E(Z(x)) - E(Z(x-h)) = E(\pm U) - E(\pm U) = 0 - 0 = 0$$ yet almost surely $U\ne -U$, showing that almost all realizations of this process are discontinuous at $0$, even though the mean of the process is continuous everywhere. Interpretation Diggle and Ribeiro discuss this issue [at p. 66]. 
They are talking about intrinsic random functions, for which the increments $Z(x)-Z(x-h)$ are assumed stationary (not just weakly stationary): Intrinsic random functions embrace a wider class of models than do stationary random functions. With regard to spatial prediction, the main difference between predictions obtained from intrinsic and from stationary models is that if intrinsic models are used, the prediction at a point $x$ is influenced by the local behaviour of the data; i.e., by the observed measurement at locations relatively close to $x$, whereas predictions from stationary models are also affected by global behaviour. One way to understand this is to remember that the mean of an intrinsic process is indeterminate. As a consequence, predictions derived from an assumed intrinsic model tend to fluctuate around a local average. In contrast, predictions derived from an assumed stationary model tend to revert to the global mean of the assumed model in areas where the data are sparse. Which of these two types of behaviour is the more natural depends on the scientific context in which the models are being used. Comment Instead, if you want control over the local behavior of the process, you should be making assumptions about the second moment of the increments, $E([Z(x)-Z(x-h)]^{2})$. For instance, when this approaches $0$ as $h\to 0$, the process is mean-square continuous. When there exists a process $Z^\prime$ for which $$E([Z(x)-Z(x-h) - h Z^\prime(x)]^{2}) = O(h^2)$$ for all $x$, then the process is mean-square differentiable (with derivative $Z^\prime$). References Peter J. Diggle and Paulo J. Ribeiro Jr., Model-based Geostatistics. Springer (2007)
28,582
How to compute R-squared value when doing cross-validation?
It is neither of them. Calculate the mean square error and the variance of each group and use the formula $R^2 = 1 - \frac{\mathbb{E}(y - \hat{y})^2}{\mathbb{V}({y})}$ to get R^2 for each fold. Report the mean and standard error of the out-of-sample R^2. Please also have a look at this discussion. There are lots of examples on the web, specifically R codes, where $R^2$ is calculated by stacking together the results of cross-validation folds and reporting $R^2$ between this chimeric vector and the observed outcome variable y. However, answers and comments in the discussion above, and this paper by Kvålseth, which predates the wide adoption of the cross-validation technique, strongly recommend using the formula $R^2 = 1 - \frac{\mathbb{E}(y - \hat{y})^2}{\mathbb{V}({y})}$ in the general case. There are several things which might go wrong with the practice of (1) stacking and (2) correlating predictions. 1. Consider observed values of y in the test set: c(1,2,3,4) and the prediction: c(8, 6, 4, 2). Clearly the prediction is anti-correlated with the observed values, but you will be reporting a perfect correlation $R^2 = 1.0$. 2. Consider a predictor that returns a vector which is a replicated mean of the train points of y. Now imagine that you sorted y before splitting into cross-validation (CV) folds. You split without shuffling, e.g. in 4-fold CV on 16 samples you have the following fold ID labels of the sorted y: foldid = c(1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3, 4, 4, 4, 4) y = c(0.09, 0.2, 0.22, 0.24, 0.34, 0.42, 0.44, 0.45, 0.45, 0.47, 0.55, 0.63, 0.78, 0.85, 0.92, 1) When you split your sorted y points, the mean of the train set will anti-correlate with the mean of the test set, so you get a negative Pearson $R$. Now you calculate a stacked $R^2$ and you get a pretty high value, even though your predictors are just noise and the prediction is based on the mean of the seen y. See the figure below for 10-fold CV
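The per-fold formula can be illustrated on the toy example above. The 4-fold split and train-mean "predictor" below follow the text; the mean out-of-sample $R^2$ comes out strongly negative, which is the honest verdict on a noise predictor.

```r
# Per-fold out-of-sample R^2 = 1 - MSE/Var, then averaged over folds.
foldid <- rep(1:4, each = 4)
y <- c(0.09, 0.2, 0.22, 0.24, 0.34, 0.42, 0.44, 0.45,
       0.45, 0.47, 0.55, 0.63, 0.78, 0.85, 0.92, 1)
r2 <- sapply(1:4, function(k) {
  yhat <- mean(y[foldid != k])  # "predict" each test point by train mean
  1 - mean((y[foldid == k] - yhat)^2) / var(y[foldid == k])
})
r2        # each fold's out-of-sample R^2 (all negative here)
mean(r2)  # strongly negative: the noise predictor is rightly penalized
```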
28,583
How to compute R-squared value when doing cross-validation?
Update: Revisiting my 'youthful' answer, I agree, this stitching approach is not the right way to compute the R-squared metric. Stitching may be useful for visual inspection of residuals. I leave the answer as is, as it is mentioned in other answers. @Is it the averaged R squared value of the 5 models? -No, it is computed as seen below. You predict the k-fold observations, stitch them together into an ordered vector where obs#1 is first and obs#last is last. Then calculate the squared Pearson product-moment correlation (R²) of this k-fold prediction vector with the response vector (y). The CV correlation with the response (y) is lower than that of a direct MLR fit. In the example below R²(CV) = .63 and R²(direct fit) = .82. This suggests the simple MLR here is slightly overfitted; if this bothers you, you could try to do somewhat better with PLS, ridge regression or PCR. I have not heard of any 2% rule. library(foreach) obs=250 vars=72 nfolds=5 #a test data set X = data.frame(replicate(vars,rnorm(obs))) true.coefs = runif(vars,-1,1) y_signal = apply(t(t(X) * true.coefs),1,sum) y_noise = rnorm(obs,sd=sd(y_signal)*0.5) y = y_signal + y_noise #split obs randomly in nfold partitions folds = split(sample(obs),1:nfolds) #run nfold loops, train, predict.. #use cbind to stitch together predictions of each test set into one test.preds = foreach(i = folds,.combine=cbind) %do% { Data.train = data.frame(X=X[-i,],y=y[-i]) Data.test = data.frame(X=X[i ,],y=y[ i]) lmf = lm(y~.,Data.train) test.pred = rep(0,obs) test.pred[i] = predict(lmf,Data.test) return(test.pred) } CVpreds = apply(test.preds,1,sum) cat(nfolds,"-fold CV, pearson R^2=",cor(CVpreds,y)^2,sep="") cat("simple MLR fit, pearson R^2=",cor(lm(y~.,data.frame(y,X))$fit,y)^2,sep="")
28,584
Using bootstrap to obtain sampling distribution of 1st-percentile
Bootstrap inference for the extremes of a distribution is generally dubious. When bootstrapping n-out-of-n, the minimum or maximum of a sample of size $n$ appears in a given resample with probability $1 - (1-1/n)^n \approx 1 - {\rm e}^{-1} = 63.2\%$, so that is how often you will reproduce your sample extreme observation; likewise you have approximately ${\rm e}^{-1} - {\rm e}^{-2}=23.3\%$ chance to reproduce your second most extreme observation, and so on. You get a discrete distribution concentrated on a handful of sample order statistics, which has little to do with the shape of the underlying distribution at the tail. Moreover, the bootstrap cannot give you anything below your sample minimum, even when the distribution has support below this value (as would be the case with most continuous distributions, like, say, the normal). The solutions are complicated and rely on a combination of asymptotics from extreme value theory and subsampling $m$ out of $n$ observations (actually, way fewer than $n$: the ratio $m/n$ should converge to zero as $n\to\infty$).
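A quick simulation (a Python sketch, not part of the original answer) illustrates the $1 - e^{-1}$ figure: the fraction of n-out-of-n bootstrap resamples whose minimum equals the sample minimum settles near 63.2%.

```python
import numpy as np

rng = np.random.default_rng(42)
n, B = 100, 4000

x = rng.normal(size=n)          # one observed sample
sample_min = x.min()

# draw B bootstrap resamples and record how often the sample minimum reappears
hits = 0
for _ in range(B):
    resample = rng.choice(x, size=n, replace=True)
    if resample.min() == sample_min:
        hits += 1

frac = hits / B
print(frac)   # close to 1 - (1 - 1/n)**n, about 0.634 for n = 100
```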
28,585
How to interpret ACF and PACF plots
There is no apparent structure in the plots that you show. The lag orders of those negative partial autocorrelations that lie outside the bands are not multiples of each other (they are lags 22, 56, 62, 78, 94), i.e., they do not arise after a regular number of lags such as 12, 24, 36, 48, so I wouldn't infer any pattern from the plot based on that. As a complement you may apply a runs test, a test for independence that can capture runs of positive or negative values, which would suggest some pattern in the data. As regards the significance of some of the autocorrelations, I see that they arise at large orders. You should think about whether those autocorrelations make sense or could be expected in the context of your data. Is it sensible to expect that the value observed 56 observations ago affects the current observation? If we had quarterly data, it would be worth inspecting significant correlation at lags 8 and 12, because they are multiples of the periodicity of the data and may reflect some seasonal pattern that we could explain in the context of the data. But I wouldn't be too concerned if significant lags arose at lags 9, 11, or much higher lags for which I had no explanation that would justify them as a regular pattern.
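As an illustration, a minimal Wald-Wolfowitz runs test on the signs of a series (above/below the median) can be coded directly. This is a Python sketch using the standard normal approximation for the number of runs; the example series are made up.

```python
import numpy as np
from math import erf, sqrt

def runs_test(x):
    """Wald-Wolfowitz runs test on the signs of x (above/below the median)."""
    signs = x > np.median(x)
    n1 = int(signs.sum())                             # "positive" signs
    n2 = len(signs) - n1                              # "negative" signs
    runs = 1 + int(np.sum(signs[1:] != signs[:-1]))   # sign changes + 1
    mu = 2 * n1 * n2 / (n1 + n2) + 1                  # expected runs under independence
    var = (2 * n1 * n2 * (2 * n1 * n2 - n1 - n2)) / ((n1 + n2) ** 2 * (n1 + n2 - 1))
    z = (runs - mu) / sqrt(var)
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))   # two-sided normal p-value
    return z, p

rng = np.random.default_rng(1)
white = rng.normal(size=500)      # independent noise: runs look random
trended = np.cumsum(white)        # random walk: long runs on one side of the median

z_w, p_w = runs_test(white)
z_t, p_t = runs_test(trended)
print(p_w, p_t)   # p_t should be tiny: far fewer runs than expected
```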
28,586
How to interpret ACF and PACF plots
Correlogram examination of the residuals (the differences between the actual data points and the fitted values) is performed to check whether any significant patterns in the data have been left out of the ARIMA model. If all information has been captured, then the ACF and PACF plots of the residuals should resemble white noise. If a visual examination does not let you conclude this with confidence, you can run a Box-Ljung test on the residuals. The null hypothesis, in this scenario, is that the residuals are not different from white noise. The following is the code to run the test in R:

Box.test(residuals, lag = 28, fitdf = 5, type = "Ljung")

The lag value is set based on the number of lag autocorrelation coefficients to include, and fitdf is the number of degrees of freedom to subtract. For an ARIMA(p,d,q)(P,D,Q)m, I usually set fitdf = (p + q + P + Q). If the Box-Ljung test returns a large p-value, it suggests that the residuals have no remaining autocorrelation, i.e., they resemble white noise.
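The Ljung-Box statistic itself is simple enough to compute by hand: $Q = n(n+2)\sum_{k=1}^{h} \hat\rho_k^2/(n-k)$, referred to a $\chi^2_{h-\mathrm{fitdf}}$ distribution. Here is a self-contained Python sketch mirroring the R call above (not the original answer's code; the example residual series are simulated):

```python
import numpy as np
from scipy.stats import chi2

def ljung_box(resid, lags, fitdf=0):
    """Ljung-Box test: H0 = no autocorrelation in resid up to lag `lags`."""
    x = np.asarray(resid) - np.mean(resid)
    n = len(x)
    denom = np.sum(x ** 2)
    # sample autocorrelations at lags 1..lags
    rho = np.array([np.sum(x[k:] * x[:-k]) / denom for k in range(1, lags + 1)])
    q = n * (n + 2) * np.sum(rho ** 2 / (n - np.arange(1, lags + 1)))
    p = chi2.sf(q, df=lags - fitdf)
    return q, p

rng = np.random.default_rng(7)
noise = rng.normal(size=300)   # residuals from a well-fitting model
walk = np.cumsum(noise)        # heavily autocorrelated "residuals"

q1, p1 = ljung_box(noise, lags=20)
q2, p2 = ljung_box(walk, lags=20)
print(p1, p2)   # p2 should be essentially zero
```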
28,587
Effect size for a one-sample t-test
The standard effect size for a one-sample t-test is the difference between the sample mean and the null value in units of the sample standard deviation: $$ d = \frac{\bar x - \mu_0}{s} $$ The interpretation here is essentially the same as for the two-sample version of the standardized mean difference: it is the number of standard deviations by which your distribution diverges from the null value on average. As in most cases with effect sizes, you can think of it as taking the $N$ out of your test statistic. Thus, with a test statistic / $p$-value you get a sense of the confidence you have in your result, but these conflate the size of the effect with $N$, so from a small $p$ you don't know if you have a big effect with a small $N$ or a small effect with a big $N$. Here, you get a point estimate of the magnitude of the shift, but you don't know from $d = .5$ alone whether or not you can be confident that the true effect isn't $0$.
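The formula is a one-liner in practice. A minimal Python sketch (the sample values are made up for illustration):

```python
import numpy as np

def cohens_d_one_sample(x, mu0):
    """Standardized mean difference of a sample against a null value mu0."""
    x = np.asarray(x, dtype=float)
    return (x.mean() - mu0) / x.std(ddof=1)   # ddof=1: sample standard deviation

x = [2, 4, 4, 4, 5, 5, 7, 9]                  # hypothetical measurements
d = cohens_d_one_sample(x, mu0=4)
print(round(d, 3))                            # mean 5, s ~ 2.138, so d ~ 0.468
```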
28,588
Effect size for a one-sample t-test
The important consideration is: what measure of effect matters for your purposes? One common approach is to measure the number of standard deviations of shift, but that's probably not much use to you if you're trying to work out how much the mean changed. After all, if you're manufacturing screws, knowing the screws are 0.073 standard deviations shorter is not much use. Knowing they're about 0.016mm shorter on average -- that potentially matters. But if you were dealing with things whose scales are somewhat arbitrary, the numbers themselves aren't especially meaningful in their own right, and then SDs of shift make far more sense. If you're dealing with something like test scores, a change of 3 doesn't necessarily mean very much... but if you know it's half a standard deviation, that might matter much more.
28,589
Finding outliers on a scatter plot
As a start in identifying the "scattered" points, consider focusing on locations where a kernel density estimate is relatively low. This suggestion assumes little or nothing is known or even suspected initially about the "locus" of the points--the curve or curves along which most of them will fall--and it is made in the spirit of semi-automated exploration of the data (rather than testing of hypotheses). You might need to play with the kernel width and the threshold of "relatively low". There exist good automatic ways to estimate the former, while the latter could be identified via an analysis of the densities at the data points (to identify a cluster of low values). Example The figure is generated from a combination of two kinds of data: one, shown as red points, is high-precision data, while the other, shown as blue points, is relatively low-precision data obtained near the extreme low value of $X$. In its background are (a) contours of a kernel density estimate (in grayscale) and (b) the curve around which the points were generated (in black). The points with relatively low densities have been circled automatically. (The densities at these points are less than one-tenth of the mean density among all points.) They include most--but not all!--of the low-precision points and some of the high-precision points (at the top right). Low-precision points lying near the curve (as extrapolated by the high-precision points) have not been circled. The circling of the high-precision points highlights the fact that wherever points are sparse, the trace of the underlying curve will be uncertain. This is a feature of the suggested approach, not a limitation! Code R code to produce this example follows. It uses the ks library, which assesses anisotropy in the point pattern to develop an oriented kernel shape. This approach works well in the sample data, whose point cloud tends to be long and skinny.

#
# Simulate some data.
#
f <- function(x) -0.55 + 0.45*x + 0.01/(1.2-x)^2 # The underlying curve
set.seed(17)
n1 <- 280; n2 <- 15
x <- c(1.2 - rbeta(n1, .9, .6), rep(0.1, n2))
y <- f(x)
d <- data.frame(x=x + c(rnorm(n1, 0, 0.025), rnorm(n2, 0, 0.1)),
                y=y + c(rnorm(n1, 0, 0.025), rnorm(n2, 0, 0.33)),
                group=c(rep(1, n1), rep(2, n2)))
d <- subset(d, subset=(y <= 1.0)) # Omit any high-y points
#
# Plot the density estimate.
#
require(ks)
p <- cbind(d$x, d$y)
dens <- kde(p)
n.levels <- 13
colors <- gray(seq(1, 0, length.out=n.levels))
plot(dens, display="filled.contour2", cont=seq(0, 100, length.out=n.levels),
     col=colors, xlab="X", ylab="Y")
#
# Evaluate densities at the data points.
#
dens <- kde(p, eval.points=p)
d$Density <- dens$estimate
#
# Plot the (correct) curve and the points.
#
curve(f(x), add=TRUE, to=1.2, col="Black")
points(d$x, d$y, ylim=c(-1,1), pch=19, cex=sqrt(d$Density/8),
       col=ifelse(d$group==1, "Red", "Blue"))
#
# Highlight some low-density points.
#
m <- mean(d$Density)
e <- subset(d, subset=(Density < m/10))
points(e$x, e$y, col="#00000080")
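The same idea translates directly to other environments. Here is a Python sketch using scipy's gaussian_kde (the curve, the planted outliers, and the density threshold are illustrative choices, not taken from the answer above): evaluate the density at each data point and flag points whose local density falls far below the average, as in the m/10 rule of the R code.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(17)

# 200 points scattered tightly around a curve, plus 5 stray points
n = 200
x = rng.uniform(0, 1, n)
y = x ** 2 + rng.normal(scale=0.02, size=n)
outliers = np.array([[0.1, 1.5], [0.5, -1.0], [1.5, 0.2], [-0.5, 0.8], [0.9, -0.8]])
pts = np.vstack([np.column_stack([x, y]), outliers])

# kernel density estimate, evaluated at the data points themselves
kde = gaussian_kde(pts.T)       # expects shape (dims, n_points)
dens = kde(pts.T)

# flag points whose local density is far below the average (cf. m/10 above)
flagged = np.where(dens < dens.mean() / 10)[0]
print(flagged)                  # should pick out the 5 stray points (indices 200..204)
```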
28,590
Simulate regression data with dependent variable being non-normally distributed
If I understand your question correctly, this is quite easy. You just need to decide what distribution you want your errors to have, and use the corresponding random generation function. There are a number of skewed distributions, so you need to figure out which one you like. In addition, most skewed distributions (e.g., log normal, chi-squared, Gamma, Weibull, etc.) are right skewed, so some minor adaptations are necessary (e.g., multiply by $-1$). Here is an example modifying your code:

set.seed(5840)  # this makes the example exactly reproducible
N      <- 100
x      <- rnorm(N)
beta   <- 0.4
errors <- rlnorm(N, meanlog=0, sdlog=1)
errors <- -1*errors          # this makes them left skewed
errors <- errors + exp(1/2)  # adding back the lognormal mean e^(1/2) centers the errors on 0
y      <- 1 + x*beta + errors

I should note at this point that regression does not make any assumptions about the distributions of $X$ or $Y$, only about the errors, $\varepsilon$ (see here: What if the residuals are normally distributed, but y is not?). Thus, that was the focus of my answer above.

Update: Here is a right-skewed version with the errors distributed as Weibull:

set.seed(5840)  # this makes the example exactly reproducible
N      <- 100
x      <- rnorm(N)
beta   <- 0.4
errors <- rweibull(N, shape=1.5, scale=1)
# errors <- -1*errors                # (only needed to make them left skewed)
errors <- errors - factorial(1/1.5)  # this centers the error distribution on 0
y      <- 1 + x*beta + errors

Weibull data are right skewed already, so we don't need to switch their direction (i.e., we drop the -1*errors part). Also, from the Wikipedia page for the Weibull distribution, we see that the mean of a Weibull (with scale 1) is $E[W] = (1/{\rm shape})!$. We want to subtract that value from each of the errors so that the resulting error distribution is centered on $0$. That allows the structural part (i.e., 1 + x*beta) of your code to accurately reflect the structural part of the data generating process.

The ExGaussian distribution is the sum of a normal and an exponential. There is a function ?rexGAUS in the gamlss.dist package to generate these. I don't have that package, but you should be able to adapt my code above without too much difficulty. You could also generate a random normal variable (via rnorm()) and an exponential (via rexp()) and sum them quite easily. Just remember to subtract the population mean, $\mu + 1/\lambda$, from each error prior to adding the errors to the structural part of the data generating process. (Be careful not to subtract the sample mean, mean(errors), though!)

Some final, unrelated comments: Your example code in the question is somewhat muddled (meaning no offense). Because rnorm(N) generates data with mean=0 and sd=1 by default, 0.4*rnorm(N) will generate rnorm(N, mean=0, sd=0.4). Your code (and possibly your thinking) will be much clearer if you use the latter formulation. In addition, your code for beta seems confused. We generally think of the $\beta$ in a regression-type model as a parameter, not a random variable. That is, it is an unknown constant that governs the behavior of the data generating process, but the stochastic nature of the process is encapsulated by the errors. This isn't the way we think about it when we are working with multilevel models, and your code seems halfway between a standard regression model and the code for a multilevel regression model. Specifying your betas separately is a good idea for maintaining the conceptual clarity of the code, but for a standard regression model you would just assign a single number to each beta (e.g., beta0 <- 1; beta1 <- 0.4).
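For readers outside R, here is a NumPy sketch of the left-skewed recipe (same idea, not the answer's original code): negate lognormal draws and add back their population mean $e^{1/2}$ so the errors are centered on zero, then check that the structural coefficients are still recovered.

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(5840)
N, beta = 100_000, 0.4

x = rng.normal(size=N)
errors = -rng.lognormal(mean=0.0, sigma=1.0, size=N)  # negate: left skewed
errors += np.exp(0.5)                                 # add back E[lognormal] = e^(1/2)
y = 1.0 + beta * x + errors

slope, intercept = np.polyfit(x, y, 1)                # OLS line through the cloud
print(np.mean(errors), skew(errors), slope, intercept)
```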
28,591
Likelihood vs. Probability
I think maybe the best way to explain the notion of likelihood is to consider a concrete example. Suppose I have a sample of IID observations drawn from a Bernoulli distribution with unknown probability of success $p$: $X_i \sim {\rm Bernoulli}(p)$, $i = 1, \ldots, n$, so the joint probability mass function of the sample is $$\Pr[{\boldsymbol X} = \boldsymbol x \mid p] = \prod_{i=1}^n p^{x_i} (1-p)^{1-x_i}.$$ This expression also characterizes the likelihood of $p$, given an observed sample $\boldsymbol x = (x_1, \ldots, x_n)$: $$L(p \mid \boldsymbol x) = \prod_{i=1}^n p^{x_i} (1-p)^{1-x_i}.$$ But if we think of $p$ as a random variable, this likelihood is not a density: $$\int_{p=0}^1 L(p \mid \boldsymbol x) \, dp \ne 1.$$ It is, however, proportional to a probability density, which is why we say it is a likelihood of $p$ being a particular value given the sample--it represents, in some sense, the relative plausibility of $p$ being some value for the observations we made. For instance, suppose $n = 5$ and the sample was $\boldsymbol x = (1, 1, 0, 1, 1)$. Intuitively we would conclude that $p$ is more likely to be closer to $1$ than to $0$, because we observed more ones. Indeed, we have $$L(p \mid \boldsymbol x) = p^4 (1 - p).$$ If we plot this function on $p \in [0,1]$, we can see how the likelihood confirms our intuition. Of course, we do not know the true value of $p$--it could have been $p = 0.25$ rather than $p = 0.8$, but the likelihood function tells us that the former is much less likely than the latter. But if we want to determine a probability that $p$ lies in a certain interval, we have to normalize the likelihood: since $\int_{p=0}^1 p^4(1-p) \, dp = \frac{1}{30}$, it follows that in order to get a posterior density for $p$, we must multiply by $30$: $$f_p(p \mid \boldsymbol x) = 30p^4(1-p).$$ In fact, this posterior is a beta distribution with parameters $a = 5, b = 2$. Now the areas under the density correspond to probabilities. 
So, what we have essentially done here is applied Bayes' rule: $$f_{\boldsymbol \Theta}(\boldsymbol \theta \mid \boldsymbol x) = \frac{f_{\boldsymbol X}(\boldsymbol x \mid \boldsymbol \theta) f_{\boldsymbol \Theta}(\boldsymbol \theta)}{f_{\boldsymbol X}(\boldsymbol x)}.$$ Here, $f_{\boldsymbol \Theta}(\boldsymbol \theta)$ is a prior distribution on the parameter(s) $\boldsymbol \theta$; the numerator is the likelihood $L(\boldsymbol \theta \mid \boldsymbol x) = f_{\boldsymbol X}(\boldsymbol x \mid \boldsymbol \theta)$ multiplied by the prior, which equals the joint distribution $f_{\boldsymbol X, \boldsymbol \Theta}(\boldsymbol x, \boldsymbol \theta)$ of $\boldsymbol X$ and $\boldsymbol \Theta$; and the denominator is the marginal (unconditional) density of $\boldsymbol X$, obtained by integrating the joint distribution with respect to $\boldsymbol \theta$--the normalizing constant that makes the posterior a probability density with respect to the parameter(s). In our numerical example, we implicitly took the prior $f_{\boldsymbol \Theta}$ to be uniform on $[0,1]$. It can be shown that, for a Bernoulli sample, if the prior is ${\rm Beta}(a,b)$, the posterior is also Beta, but with parameters $a^* = a+\sum x_i$, $b^* = b + n - \sum x_i$. We call such a prior conjugate (and refer to this as a Bernoulli-Beta conjugate pair).
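A quick numerical check of the example (a Python sketch, not part of the original answer): the normalizing constant of $p^4(1-p)$ is $1/30$, the normalized likelihood coincides with the Beta(5, 2) density, and the likelihood is maximized at $p = 4/5$ (set $\frac{d}{dp}\,p^4(1-p) = p^3(4-5p) = 0$).

```python
import numpy as np
from scipy.stats import beta

p = np.linspace(0.0, 1.0, 100_001)
lik = p**4 * (1.0 - p)                 # L(p | x) for x = (1, 1, 0, 1, 1)

# numerical integral of the likelihood over [0, 1]; exact value is 1/30
const = np.sum(lik) * (p[1] - p[0])
posterior = lik / const                # normalized: a proper density on [0, 1]

p_mle = p[np.argmax(lik)]              # maximizer of the likelihood
print(1.0 / const, p_mle)
# the normalized likelihood matches the Beta(5, 2) density pointwise
print(np.max(np.abs(posterior - beta.pdf(p, 5, 2))))
```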
28,592
How to interpret Cochran-Mantel-Haenszel test?
The first test tells you that the odds ratio between A and B, ignoring C, is different from 1. Looking at the stratified analysis helps you decide whether it's all right to ignore C. The CMH test tells you that the odds ratio between A and B, adjusting for C, is different from 1. It returns a weighted average of the stratum-specific odds ratios, so if these are $<1$ in some strata and $>1$ in others, they could cancel out and erroneously tell you there is no association between A and B. So we must test whether it is reasonable to assume that the odds ratios are equal (at the population level) across all the levels of C. The Breslow-Day test of interaction does exactly this, with the null hypothesis that all strata have the same odds ratio, which need not be equal to 1. This test is implemented in the epiR R package. The Breslow-Day p value of .14 means we fail to reject that homogeneity assumption, so the adjusted odds ratio is legitimate. But this doesn't help us decide between CMH and Fisher's exact (or Pearson's $\chi^2$) tests. If the Breslow-Day test were significant, you would need to report stratum-specific odds ratios. Since it's not, you need to ask whether it's necessary to adjust for C. Does C "confound" the association between A and B? The heuristic I learned (not a statistical test) was to check whether the proportional difference between the unadjusted and adjusted odds ratios is more than 10%. Here, $\frac{1.75-1.56}{1.75}=0.108 > 0.10$, so adjusting for C via CMH is appropriate.
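The Mantel-Haenszel adjusted odds ratio and the 10% heuristic can be sketched as follows (the stratified counts below are invented for illustration; the 1.75 and 1.56 in the answer came from the asker's own data):

```python
import numpy as np

# Hypothetical 2x2 tables (rows: A yes/no, cols: B yes/no),
# one per level of C.  These counts are made up.
tables = [np.array([[30, 20], [15, 35]]),
          np.array([[25, 25], [20, 30]])]

def odds_ratio(t):
    return (t[0, 0] * t[1, 1]) / (t[0, 1] * t[1, 0])

# Crude (unadjusted) OR: collapse the tables over C.
crude = odds_ratio(sum(tables))

# Mantel-Haenszel adjusted OR:  sum_k a_k d_k / n_k  over  sum_k b_k c_k / n_k.
num = sum(t[0, 0] * t[1, 1] / t.sum() for t in tables)
den = sum(t[0, 1] * t[1, 0] / t.sum() for t in tables)
or_mh = num / den

# The 10% heuristic: adjust for C if the crude and adjusted ORs
# differ proportionally by more than 10%.
change = abs(crude - or_mh) / crude
print(crude, or_mh, change)
```

With these particular counts the crude and adjusted ORs differ by well under 10%, so the heuristic would say adjustment is optional; with the asker's data the difference was 10.8%, tipping the decision toward CMH.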
28,593
Machine learning applications in number theory
Genetic algorithms were used to lower the bounded prime gap to 4,680 in the wake of Zhang's twin-primes breakthrough, as part of the associated Polymath project. The bound has since been lowered further by other methods, but this shows some potential for machine-learning approaches in this and related areas: such algorithms can be used to devise and optimize effective "combs" -- essentially sieves -- for screening for the smallest possible prime gaps. From Together and Alone, Closing the Prime Gap (Erica Klarreich, Quanta Magazine, 19 November 2013): The team eventually came up with the Polymath project’s record-holder — a 632-tooth comb whose width is 4,680 — using a genetic algorithm that “mates” admissible combs with each other to produce new, potentially better combs.
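The "mating" idea can be sketched in miniature. This is a generic toy, not the Polymath comb search: the all-ones fitness function stands in for the real objective of finding a narrow admissible comb, which is omitted here.

```python
import random

random.seed(0)

GENOME, POP, GENS = 32, 40, 60

def fitness(g):
    return sum(g)  # toy objective: count of ones

def mate(a, b):
    cut = random.randrange(1, GENOME)   # one-point crossover
    child = a[:cut] + b[cut:]
    i = random.randrange(GENOME)        # single point mutation
    child[i] ^= 1
    return child

# Start from random bitstrings; each generation keeps the fitter half
# and fills the rest by mating random pairs of survivors.
pop = [[random.randint(0, 1) for _ in range(GENOME)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:POP // 2]
    pop = elite + [mate(*random.sample(elite, 2)) for _ in range(POP - len(elite))]

best = max(pop, key=fitness)
print(fitness(best))  # typically close to GENOME after 60 generations
```

The Polymath version differed in its representation (tooth positions of a comb) and fitness (admissibility plus width), but the select-mate-mutate loop is the same shape.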
28,594
Machine learning applications in number theory
See the 2019 preprint Machine Learning meets Number Theory: The Data Science of Birch-Swinnerton-Dyer by Alessandretti, Baronchelli & He. Here is the Abstract: Empirical analysis is often the first step towards the birth of a conjecture. This is the case of the Birch-Swinnerton-Dyer (BSD) Conjecture describing the rational points on an elliptic curve, one of the most celebrated unsolved problems in mathematics. Here we extend the original empirical approach, to the analysis of the Cremona database of quantities relevant to BSD, inspecting more than 2.5 million elliptic curves by means of the latest techniques in data science, machine-learning and topological data analysis. Key quantities such as rank, Weierstrass coefficients, period, conductor, Tamagawa number, regulator and order of the Tate-Shafarevich group give rise to a high-dimensional point-cloud whose statistical properties we investigate. We reveal patterns and distributions in the rank versus Weierstrass coefficients, as well as the Beta distribution of the BSD ratio of the quantities. Via gradient boosted trees, machine learning is applied in finding inter-correlation amongst the various quantities. We anticipate that our approach will spark further research on the statistical properties of large datasets in Number Theory and more in general in pure Mathematics.
28,595
How to visualise "equal in distribution" in the context of stochastic dominance?
Graphing distribution functions is a standard way to visualize distributions which makes "$\overset{d}{=}$" almost trivial to understand. What is really needed in this context, though, is an understanding of how adding two random variables changes their distribution functions. This answer develops that understanding by explaining the concepts by means of visualizations. The result is somewhat surprising: a Wikipedia claim about the connection between stochastic dominance and equivalence in distribution appears to be wrong. Visualizing random variables By definition, a random variable is a (measurable) real-valued function defined on a sample space $\Omega$ ("Omega"). A common way to visualize functions is to graph them on a pair of axes. The points on one axis represent the elements of $\Omega$ (the domain) while the points on the other axis represent the possible values a function can take on (the range). Although conventionally a horizontal axis is used for the domain, for reasons that will soon become apparent I will dedicate the vertical axis to this role, reserving the horizontal axis to represent the range (all real numbers). As a running example, let $\Omega = \{a, b, c, d\}$ be a set of four possible outcomes of an experiment. Assume that all subsets are measurable (can have probabilities associated with them). Suppose the random variable $X$ assigns the values $2$, $4$, $5$, and $5$ to these outcomes, respectively. Then the upper left panel of the figure, "Random Variable X," is a graph of $X$. The centers of the blue dots locate the values of $X$ for each element of $\Omega$. Visualizing probabilities A random variable exists independently of any probability measure on $\Omega$. For this example, I posit a measure $\mathbb{P}$ in which $b$, $c$, and $d$ have equal probabilities of $1/5$ but $a$ has twice that probability, $2/5$. I have depicted that measure by making the areas of the dots in the graph directly proportional to the probabilities. 
The total area of all four dots therefore is taken to be unity (a probability of $1$). Probability distribution functions The cumulative distribution function (CDF) $F_X$ of a random variable $X:\Omega\to\mathbb{R}$ is determined by sweeping across the value ($\mathbb{R}$) axis from $-\infty$ to $\infty$ in the graph of $X$. The total area of the dots swept up in the graph of $X$ between $-\infty$ and any point $x$ (including $x$ itself) is the value of $F_X(x)$. This is illustrated in the left hand panels. Because the value axis is horizontal, the sweeping proceeds from left to right. The region in the graph that has already been swept out at $x=4.5$ is shaded. The total shaded area covers one large dot for $a$ ($\mathbb{P}(\{a\})=2/5$) and one small dot for $b$ ($\mathbb{P}(\{b\})=1/5$). The total probability swept out so far is $3/5$. Therefore, as shown in the bottom left graph "CDF," where we can envision the left-to-right sweep occurring in tandem with the sweep in the upper graph, the value of $F_X(4.5)$ equals $3/5 = 0.6$. (This is why I made the value axis horizontal in the graph of $X$, because it enables us to visualize this sweeping process using a conventional graph of the CDF, where the value axis is horizontal and the probability axis is vertical.) (First order) stochastic dominance The right panels use these methods to show the same random variable $X$ together with another random variable $Y$ (defined on the same set $\Omega$) that it dominates. By definition, $X$ dominates $Y$ provided $F_X(x) \le F_Y(x)$ for all values $x$ and, for at least some values $x$, $F_X(x) \ne F_Y(x)$. Graphically, this means that the solid blue lines and dots in the lower right panel ("CDFs of X and Y") always lie at or beneath the dashed red lines and dots. Because a CDF can never decrease (probability can only be added in during the sweeping process, never dropped out), this geometric fact is equivalent to the graph of $F_X$ lying to the right of the graph of $F_Y$. 
Compared to $Y$, $X$ is shifted towards higher values. Notice, however, that $X$ is not uniformly greater than $Y$: whereas $X(a)=2,$ $Y(a)=3$ is larger. Nevertheless, $X$ manages to dominate $Y$ because there is a different subset of $\Omega$ (namely $\{b,c,d\}$) at which $Y$ is clearly inferior to $X$. The key idea worth pondering is that the probabilities for $X$ and $Y$ depicted in the lower graphs (of the distribution functions) can come from different subsets of $\Omega$. It is not necessary, for instance, that all the probability associated with $F_X(2)$ (which comes only from $\{a\}$) correspond to the same set associated with $F_Y(2)$ (which comes from $\{b, c, d\}$). In this example the two sets have nothing in common! Adding and subtracting random variables Random variables are added and subtracted pointwise. For instance, the random variable $X-Y$ has the value $(X-Y)(a) = X(a) - Y(a) = 2 - 3 = -1$. Similar calculations hold for the rest of $\Omega$. When, for any $\omega\in\Omega$, we add a negative value to $X(\omega)$, that must shift the point for $\omega$ to the left in the graph of $X$, because I have oriented the value axis to point to the right. The Wikipedia claim The Wikipedia article on stochastic dominance uses the term "gamble" without definition. This appears to be a synonym for "random variable" (and the word "state" appears to refer to any element of $\Omega$). It uses two forms of notation for gambles, apparently interchangeably: "$A$" and "$x_A$" refer to one gamble and "$B$" and "$x_B$" to another. I will simply use $X$ for the former and $Y$ for the latter. 
This occurs if and only if their graphs coincide. Consider how such a coincidence could be made to happen by changing $X$ to $X+Z$: that is, by either keeping the values of $X$ the same or making some of them smaller, we wish to transform the graph of $F_X$ into the graph of $F_Y$. The figure shows a counterexample to the claim. In the sweeping-out construction of $F_{X+Z}$, by the time we have swept out the probability through the value $x=2$, the value of $X+Z$ at $a$ must already have been accounted for (since $Z(a)$ can be no greater than $0$). Therefore, no matter how $X$ is altered by $Z$, the value of $F_{X+Z}(2)$ must be $2/5$ or greater. Moreover, the graph of $F_{X+Z}$ must exhibit a vertical jump of at least $\mathbb{P}(\{a\})=2/5$ at the point where $x = (X+Z)(a)$, because all the probability of $\{a\}$ is swept up instantaneously there. But it is obvious geometrically that the graph of $F_X$ cannot be changed in this way to coincide with the graph of $F_Y$, because the latter has no vertical jump this large until $x=3$, which is too late. When neither graph has any discrete jumps--that is, when both $X$ and $Y$ have continuous distributions--we can systematically alter the values of $X$ to shift the graph of $F_X$ to the left until it coincides with the graph of $F_Y$. (The proof of this involves either consideration of "infinitesimal" amounts of probability or else a measure-theoretic limiting argument.)
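The sweeping-out construction of a CDF and the dominance check are easy to mechanize. In the sketch below, $X$ and its probabilities are taken from the answer; the values of $Y$ are hypothetical (the answer only fixes $Y(a)=3$), chosen so that $X$ dominates $Y$:

```python
import numpy as np

# Omega = {a, b, c, d} with P = (2/5, 1/5, 1/5, 1/5); X = (2, 4, 5, 5)
# as in the answer.  Y's values other than Y(a) = 3 are my own choice.
probs = np.array([0.4, 0.2, 0.2, 0.2])
X = np.array([2, 4, 5, 5])
Y = np.array([3, 0, 1, 2])

def cdf(values, x):
    # "Sweep" from the left: total probability of outcomes with value <= x.
    return probs[values <= x].sum()

grid = np.arange(-1, 7)
FX = np.array([cdf(X, x) for x in grid])
FY = np.array([cdf(Y, x) for x in grid])

# First-order dominance: F_X <= F_Y everywhere, strictly somewhere.
print(np.all(FX <= FY) and np.any(FX < FY))  # True

# F_X jumps by P({a}) = 2/5 at x = 2; with this Y, F_Y's first jump that
# large occurs only at x = 3 -- the geometric obstruction discussed above.
print(FX[grid == 2] - FX[grid == 1])  # [0.4]
```

Plotting `FX` and `FY` as step functions on `grid` reproduces the lower panels of the figure described in the answer.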
28,596
How to visualise "equal in distribution" in the context of stochastic dominance?
Take an unbiased coin. The random variable "Heads" is equal in distribution to the random variable "Tails". The variable "Tails" is equal almost surely to Not(Heads), since they are the same function of the outcome. Two different unbiased coins are also equal in distribution, but they are not equal almost surely: I do not know the value of the second coin from knowing the value of the first.
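A quick simulation makes the distinction concrete (a sketch assuming NumPy; the sample size and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

# One fair coin: H = 1 for heads; "Tails" is then 1 - H.
H = rng.integers(0, 2, size=100_000)
T = 1 - H

# Equal in distribution: both are Bernoulli(1/2) ...
print(H.mean(), T.mean())   # both ~0.5

# ... but certainly not equal almost surely: they always disagree.
print(np.mean(H == T))      # 0.0

# A second, independent fair coin is also equal in distribution to the
# first, yet knowing H tells you nothing: they agree only ~half the time.
H2 = rng.integers(0, 2, size=100_000)
print(np.mean(H == H2))     # ~0.5
```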
28,597
How to visualise "equal in distribution" in the context of stochastic dominance?
This is meant to be a comment on @whuber's answer, but I don't have the reputation to do this. @Sycorax turned a previous answer I made into a comment but it was truncated, so I'm reproducing my original post below and adding a reply to @whuber's reply. The claim on Wikipedia refers to Strassen's theorem -- see e.g. (3) in this note ("first-order stochastic dominance" is just the strict version of the usual stochastic order). It is incorrectly stated, but not for the reason mentioned here. The problem comes from the second part of the sentence "where $y \leq 0$ in all possible states (and strictly negative in at least one state)". The condition in parentheses is not sufficient; instead, this should be and not equal in distribution to 0. Indeed, it is possible to have $X(\omega) \neq 0$ for some $\omega$ and yet have $X$ be equal in distribution to 0. I don't understand @whuber's post and believe it is incorrect. At any rate, his figure does not show a counterexample to Strassen's theorem. The theorem says that we can find two random variables $X'$ and $Z$, defined on the same probability space as $Y$, such that (1) $X'$ is equal in distribution to $X$, (2) $Z$ is almost surely non-positive (and not equal in distribution to 0), and (3) $Y = X' + Z$. After my initial reply, @whuber said: First, $X$ and $Y$ are defined on the same probability space, as they must be. I have updated the figure (in the upper right panel) to make that more clear. Second, this is not a counterexample to Strassen's Theorem, because that theorem (at least in the generality discussed by Lindvall in the note you reference) applies only to complete separable metric spaces, which implies they must have at least a countable number of outcomes, which is not the case in my example. First, note that in general $X$ and $Y$ need not be defined on the same probability space. 
It is the new random variable $X'$ such that $Y = X' + Z$ that has to be defined on the same probability space as $Y$ (otherwise that last equality would make no sense). Second, every finite (or countably infinite) topological space is separable. Third, here is one way to obtain the coupling we are looking for: let $F^{-1}_X$ and $F^{-1}_Y$ be the inverse CDFs of $X$ and $Y$ (also known as their quantile functions), and let $U$ be a uniform variable on [0, 1]. Then, $F^{-1}_X(U) \sim X$, $F^{-1}_Y(U) \sim Y$ and $F^{-1}_Y(U) \leq F^{-1}_X(U)$.
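The quantile coupling in the last paragraph can be sketched numerically (my choice of distributions: $X \sim$ Exponential with mean 2 stochastically dominates $Y \sim$ Exponential with mean 1, and the exponential quantile function is $F^{-1}(u) = -\mathrm{scale}\cdot\log(1-u)$):

```python
import numpy as np

rng = np.random.default_rng(0)

# Feed one uniform U into both quantile functions so the coupled draws
# live on a common probability space.
U = rng.uniform(size=100_000)
Xc = -2.0 * np.log1p(-U)   # F_X^{-1}(U), X ~ Exponential(mean 2)
Yc = -1.0 * np.log1p(-U)   # F_Y^{-1}(U), Y ~ Exponential(mean 1)

print(np.all(Yc <= Xc))    # True: the coupled draws are ordered pointwise

# Z = Yc - Xc is <= 0 in every state and satisfies Yc = Xc + Z, which is
# the decomposition Y =d X' + Z asserted by Strassen's theorem.
Z = Yc - Xc
print(Z.max() <= 0)        # True
```

This is the comonotone coupling: each marginal is preserved, while the shared uniform forces the pointwise ordering that the theorem requires.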
28,598
AIC, anova error: Models are not all fitted to the same number of observations, models were not all fitted to the same size of dataset
A quick search shows that it is possible (although I have to admit that I thought it wasn't) and that it isn't a bug... just another case where methods in R are hidden and result in things that seem 'unexpected', but the RTFM crowd say, "It is in the documentation."

Anyway... your solution is to call anova with the lme object as the first argument and the lm models as the second (and third, if you like) argument(s). If this seems odd, it is because it is a little odd. The reason is that when you call anova, the anova.lme method is called only if the first argument is an lme object. Otherwise, it calls anova.lm (which in turn calls anova.lmlist; if you dig into anova.lm, you'll see why). For details about how you want to be calling anova in this case, pull up the help for anova.lme. You'll see that you can compare other models with lme models, but they have to be in a position other than the first argument.

Apparently it is also possible to use anova on models fit using the gls function without caring too much about the order of the model arguments. But I don't know enough of the details to determine whether that is a good idea or not, or what exactly it implies (it seems probably fine, but it's your call). From that link, comparing lm to lme appears to be well documented and cited as a method, so I'd err in that direction, were I you. Good luck.
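A minimal sketch of the dispatch behaviour (with simulated data, since the original models aren't shown; dat, fit_lm and fit_lme are made-up names):

```r
library(nlme)

set.seed(1)
dat <- data.frame(x = rnorm(200), cat = factor(rep(1:10, each = 20)))
dat$y <- dat$x + rnorm(200)

fit_lm  <- lm(y ~ x, data = dat)
fit_lme <- lme(y ~ x, random = ~ 1 | cat, data = dat, method = "ML")

## Works: the lme object comes first, so anova.lme() is dispatched,
## and it knows how to compare against lm fits passed in "...".
anova(fit_lme, fit_lm)

## Not this way round: with the lm object first, anova.lm()/anova.lmlist()
## is dispatched instead, which does not understand lme objects.
# anova(fit_lm, fit_lme)
```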
28,599
AIC, anova error: Models are not all fitted to the same number of observations, models were not all fitted to the same size of dataset
This is definitely peculiar. As a first thought: when doing model comparison where the models have different fixed-effects structures (m2 and m3, for example), it is best to use $ML$, as REML will "change" $y$ (it premultiplies it by $k$, where $kX = 0$). It is interesting that it works using method="ML", which makes me believe it might not be a bug. It seems almost like it enforces "good practice". Having said that, let's look under the hood:

methods(AIC)
getAnywhere('AIC.default')

A single object matching 'AIC.default' was found
It was found in the following places
  registered S3 method for AIC from namespace stats
  namespace:stats
with value

function (object, ..., k = 2)
{
    ll <- if ("stats4" %in% loadedNamespaces()) stats4:::logLik else logLik
    if (!missing(...)) {
        lls <- lapply(list(object, ...), ll)
        vals <- sapply(lls, function(el) {
            no <- attr(el, "nobs")  # THIS IS THE ISSUE!
            c(as.numeric(el), attr(el, "df"),
              if (is.null(no)) NA_integer_ else no)
        })
        val <- data.frame(df = vals[2L, ], ll = vals[1L, ])
        nos <- na.omit(vals[3L, ])
        if (length(nos) && any(nos != nos[1L]))
            warning("models are not all fitted to the same number of observations")
        val <- data.frame(df = val$df, AIC = -2 * val$ll + k * val$df)
        Call <- match.call()
        Call$k <- NULL
        row.names(val) <- as.character(Call[-1L])
        val
    }
    else {
        lls <- ll(object)
        -2 * as.numeric(lls) + k * attr(lls, "df")
    }
}

where in your case you can see that:

lls <- lapply(list(m2, m3), stats4::logLik)
attr(lls[[1]], "nobs")
#[1] 500
attr(lls[[2]], "nobs")
#[1] 498

and obviously logLik is doing something (maybe?) unexpected...? No, not really: if you look at the documentation of logLik (?logLik), you'll see it is explicitly stated:

There may be other attributes depending on the method used: see the appropriate documentation. One that is used by several methods is "nobs", the number of observations used in estimation (after the restrictions if REML = TRUE)

which brings us back to our original point: you should be using ML. To use a common saying in CS: "It's not a bug; it's a feature!"

EDIT: (Just to address your comment:) Assume you fit another model, using lmer this time:

m3lmer <- lmer(y ~ x + (1 | cat))

and you do the following:

lls <- lapply(list(m2, m3, m3lmer), stats4::logLik)
attr(lls[[3]], "nobs")
#[1] 500
attr(lls[[2]], "nobs")
#[1] 498

This seems like a clear discrepancy between the two, but it really isn't, as Gavin explained. Just to state the obvious:

attr(logLik(lme(y ~ x, random = ~ 1 | cat, na.action = na.omit, method = "ML")),
     "nobs")
#[1] 500

There is a good reason why this happens in terms of methodology, I think: lme tries to make sense of the LME regression for you, while lmer, when doing model comparisons, falls back immediately to its ML results. I think there are no bugs on this matter in lme and lmer, just different rationales behind each package. See also Gavin Simpson's comment for a more insightful explanation of what went on with anova() (practically the same thing happens with AIC).
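To see the "nobs" attribute move in a self-contained way, here is a sketch with simulated data (dat is a made-up stand-in for the original data set):

```r
library(nlme)

set.seed(42)
dat <- data.frame(x = rnorm(500), cat = factor(rep(1:10, each = 50)))
dat$y <- dat$x + rnorm(500)

fit_reml <- lme(y ~ x, random = ~ 1 | cat, data = dat, method = "REML")
fit_ml   <- update(fit_reml, method = "ML")

attr(logLik(fit_reml), "nobs")  # 498 = 500 - 2 fixed-effect coefficients
attr(logLik(fit_ml),   "nobs")  # 500: the full sample, so AIC() won't warn
```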
28,600
How to best handle subscores in a meta-analysis?
This type of data is known as dependent effect sizes. Several approaches can be used to handle the dependence. I would recommend the use of three-level meta-analysis (Cheung, 2014; Konstantopoulos, 2011; Van den Noortgate et al., 2013). It decomposes the variation into level-2 and level-3 heterogeneity. In your example, the level-2 and level-3 heterogeneity refer to the heterogeneity due to subscales and studies, respectively. The metaSEM package (http://courses.nus.edu.sg/course/psycwlm/Internet/metaSEM/), implemented in R, provides functions to conduct three-level meta-analyses. For example:

## Your data
d <- round(rnorm(5,5,1),2)
sd <- round(rnorm(5,1,0.1),2)
study <- c(1,2,3,3,3)
subscore <- c(1,1,1,2,3)
my_data <- as.data.frame(cbind(study, subscore, d, sd))

## Load the library with the data set
library(metaSEM)
summary( meta3(y=d, v=sd^2, cluster=study, data=my_data) )

The output is:

Running Meta analysis with ML

Call:
meta3(y = d, v = sd^2, cluster = study, data = my_data)

95% confidence intervals: z statistic approximation
Coefficients:
            Estimate  Std.Error     lbound     ubound z value  Pr(>|z|)
Intercept 4.9878e+00 4.2839e-01 4.1482e+00 5.8275e+00  11.643 < 2.2e-16 ***
Tau2_2    1.0000e-10         NA         NA         NA      NA        NA
Tau2_3    1.0000e-10         NA         NA         NA      NA        NA
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Q statistic on homogeneity of effect sizes: 0.1856967
Degrees of freedom of the Q statistic: 4
P value of the Q statistic: 0.9959473

Heterogeneity indices (based on the estimated Tau2):
                              Estimate
I2_2 (Typical v: Q statistic)        0
I2_3 (Typical v: Q statistic)        0

Number of studies (or clusters): 3
Number of observed statistics: 5
Number of estimated parameters: 3
Degrees of freedom: 2
-2 log likelihood: 8.989807
OpenMx status1: 1 ("0" and "1": considered fine; other values indicate problems)

In this example, the estimates of the level-2 and level-3 heterogeneity are close to 0. Level-2 and level-3 covariates may also be included to model the heterogeneity. More examples of three-level meta-analysis are available at http://courses.nus.edu.sg/course/psycwlm/Internet/metaSEM/3level.html

References

Cheung, M. W.-L. (2014). Modeling dependent effect sizes with three-level meta-analyses: A structural equation modeling approach. Psychological Methods, 19(2), 211-229. doi:10.1037/a0032968

Konstantopoulos, S. (2011). Fixed effects and variance components estimation in three-level meta-analysis. Research Synthesis Methods, 2(1), 61-76. doi:10.1002/jrsm.35

Van den Noortgate, W., López-López, J. A., Marín-Martínez, F., & Sánchez-Meca, J. (2013). Three-level meta-analysis of dependent effect sizes. Behavior Research Methods, 45(2), 576-594. doi:10.3758/s13428-012-0261-6
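As a sketch of that last point, a covariate can be passed through the x argument of meta3() to model the heterogeneity. This is my own addition, not part of the original example (and whether subscore is a meaningful moderator here is beside the point; in recent metaSEM versions the same function is also available as meta3L):

```r
library(metaSEM)

## Same toy data as above
d  <- round(rnorm(5, 5, 1), 2)
sd <- round(rnorm(5, 1, 0.1), 2)
study    <- c(1, 2, 3, 3, 3)
subscore <- c(1, 1, 1, 2, 3)
my_data  <- data.frame(study, subscore, d, sd)

## Mixed-effects three-level model with subscore as a moderator
## (centred first so the intercept stays interpretable)
summary( meta3(y = d, v = sd^2, cluster = study,
               x = scale(subscore, scale = FALSE),
               data = my_data) )
```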