46,901 | Derivation for the confidence interval for a population proportion

The binomial distribution $B(n,p)$ is just the sum of $n$ Bernoulli variables with success probability $p$. Therefore the Central Limit Theorem applies, and if $n$ is "large enough" you can approximate the binomial distribution by a normal distribution with the same mean $np$ and the same variance $np(1-p)$. This means $\frac{X...

46,902 | Derivation for the confidence interval for a population proportion

This is an immediate consequence of the normal approximation to the sampling distribution of the mean (proportion). Note that if $Z$ were a standard normal RV (with mean 0 and sd 1), then we would have:
$$
\mbox{P}\left( -z_{\alpha/2} < Z < z_{\alpha/2} \right) \approx 1-\alpha.
$$
Substitute, then, the centered and ...

46,903 | How to interpret two-way interactions in Linear Mixed Effects modeling?

It means that for a given rat at Age=0, logit(Score) should be expected to be lower when TrialNumber is higher; 1.872e-06 units lower for each unit of TrialNumber. But this effect changes when we look at higher ages; the effect of TrialNumber is 2.123e-08 units higher for each additional unit of Age. As Age gets bigger, t...

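The interaction arithmetic can be sketched directly. The two coefficients below are the ones quoted in the answer; the crossover age is a derived illustration, not a quantity from the original model output:

```python
b_trial = -1.872e-06  # TrialNumber main effect (the TrialNumber slope at Age = 0)
b_inter = 2.123e-08   # TrialNumber:Age interaction coefficient

def trial_slope(age):
    """Effect on logit(Score) of one extra unit of TrialNumber at a given Age."""
    return b_trial + b_inter * age

crossover_age = -b_trial / b_inter  # Age at which the TrialNumber effect changes sign
```
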
46,904 | How to account for lag in a simple regression in R?

I would have a look at the R package dynlm. It provides an L operator that lets you model lag terms in the regression equation. The examples for the dynlm function should give you tips for working on your problem. Pay attention to configuring the time series structure.

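dynlm is an R package; the same idea, regressing $y_t$ on $x_{t-1}$ after aligning the two series, can be sketched in plain Python (the simulated series and the one-period lag are illustrative):

```python
import random

random.seed(0)
x = [random.gauss(0, 1) for _ in range(100)]
# Simulate y_t = 2 * x_{t-1} + noise, then recover the lag coefficient by OLS.
y = [2 * x[t - 1] + random.gauss(0, 0.1) for t in range(1, 100)]
x_lag = x[:-1]  # x_{t-1}, aligned with y_t (first observation dropped)

mx = sum(x_lag) / len(x_lag)
my = sum(y) / len(y)
slope = (sum((a - mx) * (b - my) for a, b in zip(x_lag, y))
         / sum((a - mx) ** 2 for a in x_lag))  # should be close to 2
```
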
46,905 | How to account for lag in a simple regression in R?

You might find the chapter on dynamic models in Market Response Models: Econometric and Time Series Analysis helpful. It's not R-specific in any way, but it will walk you through the basic model with lags and leads (when customers and/or competitors anticipate a marketing action and adjust their behavior before that ac...

46,906 | What are some good (ideally free) tools to give laymen access to basic statistical techniques?

As I said in my comment, my first concern would be that you need to figure out how to avoid making people more dangerous by handing them tools that are more impressive than Excel but that also require a lot more knowledge/intuition/experience to properly use and interpret. Sort of like replacing company cars with airpl...

46,907 | What are some good (ideally free) tools to give laymen access to basic statistical techniques?

Judging from my experience with research-active clinicians, imparting safe levels of statistical knowledge is very, very difficult. In 3-6 months while doing other things I would hazard that it's impossible.
I think in your shoes I would just make a lot of reports that write themselves (with Brew, Sweave, R2HTML, whate...

46,908 | What are some good (ideally free) tools to give laymen access to basic statistical techniques?

When it comes to data mining, Weka is very user-friendly.

46,909 | What are probabilistic approaches to finding the right number of clusters?

There are methods to do that. A good starting point is
Rasmussen, C. E. (2000). The Infinite Gaussian Mixture Model. In S. A. Solla, T. K. Leen, & K.-R. Müller (Eds.), Advances in Neural Information Processing Systems 12 (Vol. 12, pp. 554-560). MIT Press.
The idea is to put a Dirichlet prior on the mixture weights of ...

46,910 | What are probabilistic approaches to finding the right number of clusters?

The first question you should then answer is:
What is a cluster?
Most of the time, a cluster is whatever the clustering algorithm finds. Which by definition then is correct.
If you run e.g. k-means, it does a good job in finding the optimal $k$-cell Voronoi partitioning of the dataset. So if you are referring to k-mean...

46,911 | What is distribution of lengths of gaps between occurrences of ones in Bernoulli process?

The time you have to wait till the next one is a geometric variable $X\sim\mathcal{G}(p)$ with probability parameter $p$, i.e.
$$
\mathbb{P}(X=k) = (1-p)^k p \quad k=0,1,2,\ldots
$$
Fitting your distribution to the data presumably means estimating $p$ by $\hat p$ and using the plug-in distribution $\mathcal{G}(\hat p...

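A sketch of that fitting step in Python: extract the gap lengths from a 0/1 sequence and use the maximum-likelihood estimate $\hat p = 1/(1+\bar x)$, which follows from $\mathbb{E}(X) = (1-p)/p$ (the example sequence is made up):

```python
def gap_lengths(bits):
    """Lengths of the runs of zeros between consecutive ones."""
    gaps, run, seen_one = [], 0, False
    for b in bits:
        if b == 1:
            if seen_one:
                gaps.append(run)
            run, seen_one = 0, True
        else:
            run += 1
    return gaps

def fit_geometric(gaps):
    """MLE of p under P(X = k) = (1 - p)^k p: p_hat = 1 / (1 + mean gap)."""
    return 1.0 / (1.0 + sum(gaps) / len(gaps))

gaps = gap_lengths([1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1])  # gaps: [2, 1, 0, 3]
p_hat = fit_geometric(gaps)                            # 1 / (1 + 1.5) = 0.4
```
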
46,912 | How do I clean up inconsistent survey data?

Data cleaning of surveys takes longer than analysis and report write-up, so you're not alone. :)
Normally in a survey, we path the questions for respondents. So, for example, in computer-assisted telephone interviewing (or online interviews, face-to-face interviewing with a laptop), the survey programmers code the surv...

46,913 | Understanding proof of McDiarmid's inequality

$\mathbb{E}(V_{i} | X_{1}, \ldots, X_{i-1}) = 0$
Let us introduce some notation, $X_{1:i} = X_{1}, \ldots, X_{i}$.
\begin{align*}
\mathbb{E}(V_{i} | X_{1:i-1}) &= \mathbb{E}\left( \left[\mathbb{E}(g | X_{1:i}) - \mathbb{E}(g | X_{1:i-1})\right] | X_{1:i-1}\right) \\
&=\mathbb{E}\left( \mathbb{E}(g | X_{1:i}) | X_{1...

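The derivation is cut off above; it presumably concludes via the tower property of conditional expectation (a sketch, in the answer's notation):
$$
\mathbb{E}\left( \mathbb{E}(g | X_{1:i}) \,|\, X_{1:i-1} \right) = \mathbb{E}(g | X_{1:i-1}),
$$
so the two terms cancel and $\mathbb{E}(V_{i} | X_{1:i-1}) = 0$, i.e. the $V_{i}$ form a martingale difference sequence.
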
46,914 | What are the assumptions (and H0) for Wilcoxon signed-rank test?

Assumption 1 is needed. Assumption 3 is not strong enough. You need X and Y to be on scales that make differences orderable, which can mean that X and Y are interval scaled. Regarding the distributional assumption, this depends on how you state the hypothesis. If you want to make an inference about the mean differen...

46,915 | Whether to report untransformed data when performing ANOVA on transformed data?

This depends on a number of things. The analysis was done within the transformation space, so presenting the data back-transformed can distort things (untransformed means is just wrong, but converting it back from the transformed after summarizing, means, variance, etc. might be OK in certain situations). I guess the ...

46,916 | Whether to report untransformed data when performing ANOVA on transformed data?

@John has a really good answer here. I just want to add an orthogonal point. Having normally distributed data isn't as important as many people believe. The Gauss-Markov theorem tells us that it's not necessary for model estimation. Normality is required for $p$-values to be accurate with low $N$ (i.e., $p$-values wi...

46,917 | Whether to report untransformed data when performing ANOVA on transformed data?

It will depend on your application, but in the biological sciences it's advised to present the un-transformed means, as they are usually more interpretable than the transformed means.

46,918 | How can I (or should I) test that observation A tends to be greater than observation B for each subject?

You can do a paired t-test.
In R: t.test(surfOM, meanOM, paired = TRUE)
That will give you a p-value and a confidence interval for the mean of the differences. These only really make sense if the lakes are a sample of a larger population of lakes, not if you have data on all the lakes in your population (e.g. all the lak...

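The R call above computes the t statistic from the within-pair differences; a plain-Python sketch of that statistic (the lake values are invented for illustration):

```python
import math

def paired_t(a, b):
    """Paired t statistic: mean of the differences over its standard error."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)  # sample variance of differences
    return mean / math.sqrt(var / n)

surf_om = [3.1, 2.8, 4.0, 3.5, 2.9]  # hypothetical surface measurements
mean_om = [2.5, 2.6, 3.1, 3.0, 2.8]  # hypothetical whole-lake means
t_stat = paired_t(surf_om, mean_om)
```
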
46,919 | How can I (or should I) test that observation A tends to be greater than observation B for each subject?

When I hear the statement "A tends to be greater than B" it sounds like A is greater than B a bunch of the time, say, more than 50% of the time. This can happen when $\mu_{A}$ is greater than $\mu_{B}$, but it can also happen that $\mu_{A}$ is greater than $\mu_{B}$... yet $P(A > B) < 0.50$.
(For a concrete example o...

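A small made-up discrete example of the distinction drawn above, where $\mu_A > \mu_B$ and yet $P(A > B) < 0.5$:

```python
# A is 0 with probability 0.6 and 10 with probability 0.4; B is always 1.
dist_a = {0: 0.6, 10: 0.4}
b = 1
mean_a = sum(v * p for v, p in dist_a.items())            # E[A] = 4 > E[B] = 1
prob_a_gt_b = sum(p for v, p in dist_a.items() if v > b)  # P(A > B) = 0.4 < 0.5
```
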
46,920 | Are confidence intervals always symmetrical around the point estimate? [duplicate]

Short Answer: No
Long Answer: It depends.
A confidence interval obtained from an analytical technique (a formula) will be symmetrical around the point estimate on a particular scale. For example, Hazard Ratios, Risk Ratios and Odds Ratios are symmetrical around the point estimate on the natural log scale. They aren't n...

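For instance, a Woolf-type interval for an odds ratio (the 2x2 counts below are invented) is symmetric around the point estimate on the log scale but asymmetric after exponentiating:

```python
import math

# Hypothetical 2x2 table: 20/80 events in the exposed group, 10/90 in the unexposed
a, b, c, d = 20, 80, 10, 90
log_or = math.log((a * d) / (b * c))
se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # Woolf standard error on the log scale
odds_ratio = math.exp(log_or)                  # 2.25
lo = math.exp(log_or - 1.96 * se)
hi = math.exp(log_or + 1.96 * se)
# Symmetric in log units, but hi - odds_ratio > odds_ratio - lo on the ratio scale
```
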
46,921 | Which algorithm to compute p-value of logrank test with three or more groups is best?

As far as I know, the simpler formula is known to be a conservative approximation of the more complicated version. In the classical Cox and Oakes "Analysis of Survival Data" book, chapter 7.7 describes the derivation of the log-rank test as a score test in the two-sample case, and shows how the simpler formula correspo...

46,922 | Which algorithm to compute p-value of logrank test with three or more groups is best?

This amazingly complete handout about survival analysis (from Michael Vaeth, at the University of Aarhus) states on page 40:
"Some computer packages and text books use the name log rank test for a slightly different test statistic, namely: AltChi2 = (O1 - E1)^2/E1 + (O2 - E2)^2/E2. The alternative version of the l...

46,923 | Recursive partitioning using median (instead of mean)

You could also pre-process your data, using a transformation like the spatial sign transformation, or the rank-order transformation to minimize the impact of outliers.

46,924 | Recursive partitioning using median (instead of mean)

Although I have never used it, the quantregForest package seems to do what you want.
Here is the description:
Quantile Regression Forests is a tree-based ensemble method for estimation of conditional quantiles. It is particularly well suited for high-dimensional data. Predictor variables of mixed classes can be ...

46,925 | Recursive partitioning using median (instead of mean)

In addition to Johannes's suggestion about quantregForest, there is also an R package called gbm (generalized boosted machine) which uses trees to calculate conditional quantiles.

46,926 | How to determine the sample distribution based on a survey involving six variables?

There is no single answer for your question, but you can approximate the six distributions to a varying degree of accuracy. The first thing you should do is plot them using either a histogram (hist() in R) or a kernel density estimate (density()). It should give you an idea as to what parametric family (exponential, normal,...

46,927 | How to determine the sample distribution based on a survey involving six variables? | I personally think this is a poor idea. If you know that your data comes from a certain distribution, you can probably say something meaningful. You may have 0/1 responses, so the distribution is binomial, may be conditional on some other covariates -- that's a logistic regression. You may have counts, so the distribut... | How to determine the sample distribution based on a survey involving six variables? | I personally think this is a poor idea. If you know that your data comes from a certain distribution, you can probably say something meaningful. You may have 0/1 responses, so the distribution is bino | How to determine the sample distribution based on a survey involving six variables?
I personally think this is a poor idea. If you know that your data comes from a certain distribution, you can probably say something meaningful. You may have 0/1 responses, so the distribution is binomial, may be conditional on some oth... | How to determine the sample distribution based on a survey involving six variables?
I personally think this is a poor idea. If you know that your data comes from a certain distribution, you can probably say something meaningful. You may have 0/1 responses, so the distribution is bino |
46,928 | Beta binomial Bayesian updating over many iterations | 1) You could scale it down, so $\alpha,\beta\mapsto \alpha/N, \beta/N$. This would indeed allow you to continue. What this would do, however, is to make older data carry less weight (if $N$ is two, it would be carrying half as much weight). This might even be a feature, if you would rather trust newer data.
Compare for... | Beta binomial Bayesian updating over many iterations | 1) You could scale it down, so $\alpha,\beta\mapsto \alpha/N, \beta/N$. This would indeed allow you to continue. What this would do, however, is to make older data carry less weight (if $N$ is two, it | Beta binomial Bayesian updating over many iterations
1) You could scale it down, so $\alpha,\beta\mapsto \alpha/N, \beta/N$. This would indeed allow you to continue. What this would do, however, is to make older data carry less weight (if $N$ is two, it would be carrying half as much weight). This might even be a featu... | Beta binomial Bayesian updating over many iterations
1) You could scale it down, so $\alpha,\beta\mapsto \alpha/N, \beta/N$. This would indeed allow you to continue. What this would do, however, is to make older data carry less weight (if $N$ is two, it |
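The scaling trick described above can be made concrete in a few lines. This is a minimal Python sketch (the success/failure counts of 7/3 and the discount factor of 2 are invented illustration values, not from the post): dividing alpha and beta before each conjugate update preserves the posterior mean while keeping the pseudo-counts bounded.

```python
def update(alpha, beta, successes, failures, discount=1.0):
    # Divide alpha and beta before adding new counts: older data is
    # down-weighted, but the mean alpha/(alpha+beta) is preserved.
    alpha, beta = alpha / discount, beta / discount
    return alpha + successes, beta + failures

# Plain conjugate updating: the counts grow without bound.
a, b = 1.0, 1.0
for _ in range(1000):
    a, b = update(a, b, successes=7, failures=3)

# With discounting the counts stay bounded (here near 14 and 6).
a2, b2 = 1.0, 1.0
for _ in range(1000):
    a2, b2 = update(a2, b2, successes=7, failures=3, discount=2.0)

print(a / (a + b), a + b)       # mean ~0.7, huge pseudo-count
print(a2 / (a2 + b2), a2 + b2)  # mean ~0.7, pseudo-count ~20
```

With discount=2 each batch of data carries half the weight of the next one, which is exactly the "older data carry less weight" behaviour the answer describes.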
46,929 | Beta binomial Bayesian updating over many iterations | If you continue to update your prior in the manner that you described, aren't you assuming that the process that is generating your data is stationary?
If the answer to the question is yes, then all that you should need to do is take a random sample of your data to create a likelihood function and then generate the poste... | Beta binomial Bayesian updating over many iterations | If you continue to update your prior in the manner that you described, aren't you assuming that the process that is generating your data stationary?
If the answer to the question is yes, then all tha | Beta binomial Bayesian updating over many iterations
If you continue to update your prior in the manner that you described, aren't you assuming that the process that is generating your data is stationary?
If the answer to the question is yes, then all that you should need to do is take a random sample of your data to cre... | Beta binomial Bayesian updating over many iterations
If you continue to update your prior in the manner that you described, aren't you assuming that the process that is generating your data stationary?
If the answer to the question is yes, then all tha |
46,930 | Beta binomial Bayesian updating over many iterations | If alpha and beta are very large, your prior distribution must have converged already to a single point, and you can use the MAP approximation instead of the posterior distribution.
Having said that, scaling alpha and beta down would preserve the mean and keep you away from convergence (if that's what you're looking for... | Beta binomial Bayesian updating over many iterations | If alpha and beta are very large, your prior distribution must have converged already to a single point, and you can use the MAP approximation instead of the posterior distribution.
Having said that, | Beta binomial Bayesian updating over many iterations
If alpha and beta are very large, your prior distribution must have converged already to a single point, and you can use the MAP approximation instead of the posterior distribution.
Having said that, scaling alpha and beta down would preserve the mean and keen you aw... | Beta binomial Bayesian updating over many iterations
If alpha and beta are very large, your prior distribution must have converged already to a single point, and you can use the MAP approximation instead of the posterior distribution.
Having said that, |
46,931 | Beta binomial Bayesian updating over many iterations | The approach I found useful for this is to divide the a and b parameters by the maximum value of the y axis at each iteration. Thus keeping the scale constant. | Beta binomial Bayesian updating over many iterations | The approach I found useful for this is to divide the a and b parameters by the maximum value of the y axis at each iteration. Thus keeping the scale constant. | Beta binomial Bayesian updating over many iterations
The approach I found useful for this is to divide the a and b parameters by the maximum value of the y axis at each iteration. Thus keeping the scale constant. | Beta binomial Bayesian updating over many iterations
The approach I found useful for this is to divide the a and b parameters by the maximum value of the y axis at each iteration. Thus keeping the scale constant. |
46,932 | How can I use credibility intervals in Bayesian logistic regression? | I wouldn't use the means at all for the classifier. You don't need to apply "corrections" or to "smooth out" a Bayesian solution, it is the optimal one for the prior information and data that you have actually used. But the means can be useful for giving you a feel for which combinations of regressor variables are li... | How can I use credibility intervals in Bayesian logistic regression? | I wouldn't use the means at all for the classifier. You don't need to apply "corrections" or to "smooth out" a Bayesian solution, it is the optimal one for the prior information and data that you hav | How can I use credibility intervals in Bayesian logistic regression?
I wouldn't use the means at all for the classifier. You don't need to apply "corrections" or to "smooth out" a Bayesian solution, it is the optimal one for the prior information and data that you have actually used. But the means can be useful for g... | How can I use credibility intervals in Bayesian logistic regression?
I wouldn't use the means at all for the classifier. You don't need to apply "corrections" or to "smooth out" a Bayesian solution, it is the optimal one for the prior information and data that you hav |
46,933 | How can I use credibility intervals in Bayesian logistic regression? | Don't use the mean of the sampled coefficients for making predictions, instead compute the predictions for logistic regression models with all of the sampled coefficient vectors and take the mean of those predictions (or better still treat the predictions for all sampled coefficient vectors as the posterior distributio... | How can I use credibility intervals in Bayesian logistic regression? | Don't use the mean of the sampled coefficients for making predictions, instead compute the predictions for logistic regression models with all of the sampled coefficient vectors and take the mean of t | How can I use credibility intervals in Bayesian logistic regression?
Don't use the mean of the sampled coefficients for making predictions, instead compute the predictions for logistic regression models with all of the sampled coefficient vectors and take the mean of those predictions (or better still treat the predict... | How can I use credibility intervals in Bayesian logistic regression?
Don't use the mean of the sampled coefficients for making predictions, instead compute the predictions for logistic regression models with all of the sampled coefficient vectors and take the mean of t |
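The point about averaging predictions rather than coefficients can be shown with a toy one-coefficient model. A hedged Python sketch (the normal draws are simulated stand-ins for real MCMC samples; the numbers are arbitrary):

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

random.seed(0)
# Stand-in for MCMC draws of a single logistic-regression coefficient.
betas = [random.gauss(1.0, 2.0) for _ in range(10000)]
x = 3.0  # one input value

# Plug-in: predict with the posterior-mean coefficient.
mean_beta = sum(betas) / len(betas)
plugin = sigmoid(mean_beta * x)

# Posterior predictive: average the per-draw predictions instead.
predictive = sum(sigmoid(b * x) for b in betas) / len(betas)

print(plugin, predictive)
```

Because the sigmoid is non-linear, the mean of the predictions differs from the prediction at the mean coefficient; the posterior-predictive average propagates the coefficient uncertainty into the prediction, as the answer recommends.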
46,934 | How can I use credibility intervals in Bayesian logistic regression? | I don't know if I am understanding correctly your question. But I guess you may use the posterior density to assess the uncertainty around point estimates like the mean. You may plot a histogram, calculate standard deviations. This is easy to do, if you have the MCMC output. Just take the values sampled (after a burnin... | How can I use credibility intervals in Bayesian logistic regression? | I don't know if I am understanding correctly your question. But I guess you may use the posterior density to assess the uncertainty around point estimates like the mean. You may plot a histogram, calc | How can I use credibility intervals in Bayesian logistic regression?
I don't know if I am understanding correctly your question. But I guess you may use the posterior density to assess the uncertainty around point estimates like the mean. You may plot a histogram, calculate standard deviations. This is easy to do, if y... | How can I use credibility intervals in Bayesian logistic regression?
I don't know if I am understanding correctly your question. But I guess you may use the posterior density to assess the uncertainty around point estimates like the mean. You may plot a histogram, calc |
46,935 | How can I use credibility intervals in Bayesian logistic regression? | What you're looking for, and what the other respondents have proposed, is called the posterior predictive distribution. It takes into account the inherent uncertainty of the parameter estimates.
You can either use the samples from the MCMC run, or you can approximate it from the mean and covariance of the posterior dis... | How can I use credibility intervals in Bayesian logistic regression? | What you're looking for, and what the other respondants have proposed, is called the posterior predictive distribution. It takes into account the inherent uncertainty of the parameter estimates.
You c | How can I use credibility intervals in Bayesian logistic regression?
What you're looking for, and what the other respondents have proposed, is called the posterior predictive distribution. It takes into account the inherent uncertainty of the parameter estimates.
You can either use the samples from the MCMC run, or you... | How can I use credibility intervals in Bayesian logistic regression?
What you're looking for, and what the other respondants have proposed, is called the posterior predictive distribution. It takes into account the inherent uncertainty of the parameter estimates.
You c |
46,936 | Generating sorted pseudo-random numbers in Stata | The help for set_seed states
The sequences these functions produce are determined by the seed, which is just a number and which is set to 123456789 every time Stata is launched.
Stata's philosophy emphasizes reproducibility, so this consistency is not surprising. Of course you can set the seed yourself. See the he... | Generating sorted pseudo-random numbers in Stata | The help for set_seed states
The sequences these functions produce are determined by the seed, which is just a number and which is set to 123456789 every time Stata is launched.
Stata's philosophy | Generating sorted pseudo-random numbers in Stata
The help for set_seed states
The sequences these functions produce are determined by the seed, which is just a number and which is set to 123456789 every time Stata is launched.
Stata's philosophy emphasizes reproducibility, so this consistency is not surprising. Of ... | Generating sorted pseudo-random numbers in Stata
The help for set_seed states
The sequences these functions produce are determined by the seed, which is just a number and which is set to 123456789 every time Stata is launched.
Stata's philosophy |
46,937 | How to export data in R syntax? | You can use dput() to get a structure() that can be used later.
> #Build the original data frame
> x <- seq(1, 10, 1)
> y <- seq(10, 100, 10)
> df <- data.frame(x=x, y=y)
> df
x y
1 1 10
2 2 20
3 3 30
4 4 40
5 5 50
6 6 60
7 7 70
8 8 80
9 9 90
10 10 100
> #Use the dput() state... | How to export data in R syntax? | You can use dput() to get a structure() that can be used later.
> #Build the original data frame
> x <- seq(1, 10, 1)
> y <- seq(10, 100, 10)
> df <- data.frame(x=x, y=y)
> df
x y
1 | How to export data in R syntax?
You can use dput() to get a structure() that can be used later.
> #Build the original data frame
> x <- seq(1, 10, 1)
> y <- seq(10, 100, 10)
> df <- data.frame(x=x, y=y)
> df
x y
1 1 10
2 2 20
3 3 30
4 4 40
5 5 50
6 6 60
7 7 70
8 8 80
9 9 90
10... | How to export data in R syntax?
You can use dput() to get a structure() that can be used later.
> #Build the original data frame
> x <- seq(1, 10, 1)
> y <- seq(10, 100, 10)
> df <- data.frame(x=x, y=y)
> df
x y
1 |
46,938 | Advice on missing value imputation | First of all: it is not clear from your explanation whether or not you have done multiple imputation. If not, please do so: single imputation could be worse than simple complete case analysis, and both can lead to severely biased results.
Next, if I understand correctly, your problem is that you don't know which variab... | Advice on missing value imputation | First of all: it is not clear from your explanation whether or not you have done multiple imputation. If not: please do so: single imputation could be worse than simple complete case analysis, and can | Advice on missing value imputation
First of all: it is not clear from your explanation whether or not you have done multiple imputation. If not: please do so: single imputation could be worse than simple complete case analysis, and can both lead to severely biased results.
Next, if I understand correctly, your problem ... | Advice on missing value imputation
First of all: it is not clear from your explanation whether or not you have done multiple imputation. If not: please do so: single imputation could be worse than simple complete case analysis, and can |
46,939 | Advice on missing value imputation | I don't know if you have SAS experience, but I've used SAS PROCs MI and Mianalyze to perform (and then synthesize) multiple imputations in several different models. Building the "imputation model" (this yields non-biased estimates of missing data, incorporating the uncertainty one finds in non-missing data) is probably... | Advice on missing value imputation | I don't know if you have SAS experience, but I've used SAS PROCs MI and Mianalyze to perform (and then synthesize) multiple imputations in several different models. Building the "imputation model" (th | Advice on missing value imputation
I don't know if you have SAS experience, but I've used SAS PROCs MI and Mianalyze to perform (and then synthesize) multiple imputations in several different models. Building the "imputation model" (this yields non-biased estimates of missing data, incorporating the uncertainty one fin... | Advice on missing value imputation
I don't know if you have SAS experience, but I've used SAS PROCs MI and Mianalyze to perform (and then synthesize) multiple imputations in several different models. Building the "imputation model" (th |
46,940 | Generating over-dispersed counts data with serial correlation | A standard way of generating overdispersed count data is to generate data from a Poisson distribution with a random mean: $Y_i\sim Poisson(\lambda_i)$, $\lambda_i \sim F$. For example, if $\lambda_i$ has a Gamma distribution, you will get the negative binomial distribution for $Y$.
You can easily impose serial correla... | Generating over-dispersed counts data with serial correlation | A standard way of generating overdispersed count data is to generate data from a Poisson distribution with a random mean: $Y_i\sim Poisson(\lambda_i)$, $\lambda_i \sim F$. For example, if $\lambda_i$ | Generating over-dispersed counts data with serial correlation
A standard way of generating overdispersed count data is to generate data from a Poisson distribution with a random mean: $Y_i\sim Poisson(\lambda_i)$, $\lambda_i \sim F$. For example, if $\lambda_i$ has a Gamma distribution, you will get the negative binomi... | Generating over-dispersed counts data with serial correlation
A standard way of generating overdispersed count data is to generate data from a Poisson distribution with a random mean: $Y_i\sim Poisson(\lambda_i)$, $\lambda_i \sim F$. For example, if $\lambda_i$ |
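The Poisson-with-random-mean construction can be combined with an autoregressive log-mean to get both overdispersion and serial correlation. A Python sketch (the AR coefficient 0.7, noise scale 0.5, and base rate 10 are arbitrary illustration values, not from the post):

```python
import math
import random

random.seed(1)

def poisson(lam):
    # Knuth's multiplication method; fine for moderate lambda.
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while p > threshold:
        k += 1
        p *= random.random()
    return k - 1

T, phi, sigma, base = 500, 0.7, 0.5, math.log(10)
log_lam, counts = base, []
for _ in range(T):
    # AR(1) on the log-mean gives serial correlation; the random mean
    # makes the counts overdispersed relative to a plain Poisson.
    log_lam = (1 - phi) * base + phi * log_lam + random.gauss(0, sigma)
    counts.append(poisson(math.exp(log_lam)))

mean = sum(counts) / T
var = sum((c - mean) ** 2 for c in counts) / T
lag1 = sum((counts[t] - mean) * (counts[t - 1] - mean)
           for t in range(1, T)) / (T * var)
print(mean, var, lag1)  # variance well above the mean, positive lag-1 correlation
```

Swapping the lognormal mean for a Gamma-distributed one gives the negative binomial marginal mentioned in the answer; the AR structure on the latent mean is what induces the serial correlation.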
46,941 | Generating over-dispersed counts data with serial correlation | This is one way to do it:
v = rnorm(1, 30, 10)
for (i in 2:30) v = c(v, 0.5*v[i-1] + 0.5*rnorm(1, 30, 10))
round(v) | Generating over-dispersed counts data with serial correlation | This is one way to do it:
v = rnorm(1, 30, 10)
for (i in 2:30) v = c(v, 0.5*v[i-1] + 0.5*rnorm(1, 30, 10))
round(v) | Generating over-dispersed counts data with serial correlation
This is one way to do it:
v = rnorm(1, 30, 10)
for (i in 2:30) v = c(v, 0.5*v[i-1] + 0.5*rnorm(1, 30, 10))
round(v) | Generating over-dispersed counts data with serial correlation
This is one way to do it:
v = rnorm(1, 30, 10)
for (i in 2:30) v = c(v, 0.5*v[i-1] + 0.5*rnorm(1, 30, 10))
round(v) |
46,942 | How to do meta-regression in SPSS? | Don't use the built-in routines of SPSS to conduct a meta-regression (wrong standard errors; does not give you correct model indices; no heterogeneity statistics). Have a look at David Wilson's SPSS "macros for performing meta-analytic analyses". One of these macros is called MetaReg which can perform fixed-effect or m... | How to do meta-regression in SPSS? | Don't use the built-in routines of SPSS to conduct a meta-regression (wrong standard errors; does not give you correct model indices; no heterogeneity statistics). Have a look at David Wilson's SPSS " | How to do meta-regression in SPSS?
Don't use the built-in routines of SPSS to conduct a meta-regression (wrong standard errors; does not give you correct model indices; no heterogeneity statistics). Have a look at David Wilson's SPSS "macros for performing meta-analytic analyses". One of these macros is called MetaReg ... | How to do meta-regression in SPSS?
Don't use the built-in routines of SPSS to conduct a meta-regression (wrong standard errors; does not give you correct model indices; no heterogeneity statistics). Have a look at David Wilson's SPSS " |
46,943 | How to do meta-regression in SPSS? | These are some wonderful responses to your initial questions, and the reference guide is particularly helpful.
If you're looking for a relatively simple package to do meta-regression, may I recommend Borenstein's software package Comprehensive Meta Analysis. It is limited to meta-regression of a single predictor but t... | How to do meta-regression in SPSS? | These are some wonderful responses to your initial questions, and the reference guide is particularly helpful.
If you're looking for a relatively simple package to do meta-regression, may I recommend | How to do meta-regression in SPSS?
These are some wonderful responses to your initial questions, and the reference guide is particularly helpful.
If you're looking for a relatively simple package to do meta-regression, may I recommend Borenstein 's software package Comprehensive Meta Analysis. It is limited to meta-reg... | How to do meta-regression in SPSS?
These are some wonderful responses to your initial questions, and the reference guide is particularly helpful.
If you're looking for a relatively simple package to do meta-regression, may I recommend |
46,944 | How to use Confidence Intervals to find the true mean within a percentage | I am not sure what kind of variable is being audited, so I give 2 alternatives:
To be able to compute the required sample size to give an acceptable estimate to a continuous variable (= given confidence interval) you have to know a few parameters: mean, standard deviation (and to be precise: population size). If you d... | How to use Confidence Intervals to find the true mean within a percentage | I am not sure what kind of variable is being audited, so I give 2 alternatives:
To be able to compute the required sample size to give an acceptable estimate to a continuous variable (= given confide | How to use Confidence Intervals to find the true mean within a percentage
I am not sure what kind of variable is being audited, so I give 2 alternatives:
To be able to compute the required sample size to give an acceptable estimate to a continuous variable (= given confidence interval) you have to know a few parameter... | How to use Confidence Intervals to find the true mean within a percentage
I am not sure what kind of variable is being audited, so I give 2 alternatives:
To be able to compute the required sample size to give an acceptable estimate to a continuous variable (= given confide |
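For the continuous-variable alternative, the usual planning calculation solves the normal-approximation CI half-width for n. A Python sketch with made-up planning values (sd 50 around a mean of 1000, a ±0.5% target, 95% confidence; none of these numbers come from the post):

```python
import math

def sample_size(sigma, mu, rel_error=0.005, z=1.96):
    # Solve z * sigma / sqrt(n) <= rel_error * mu for n.
    half_width = rel_error * mu
    return math.ceil((z * sigma / half_width) ** 2)

print(sample_size(sigma=50, mu=1000))  # -> 385
```

Note the circularity the thread hints at: the target half-width depends on the unknown mean, so in practice a pilot estimate of both mu and sigma is needed before the formula can be applied.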
46,945 | How to use Confidence Intervals to find the true mean within a percentage | It does seem a bit odd for this problem, because there does not appear to be a pivotal statistic, or if there is, it isn't the usual Z or T statistic.
Here's why I think this is the case.
The problem of estimating the population mean, say $\mu$, to within $\pm $ 0.5% obviously depends on the value of $\mu$ (a pivotal st... | How to use Confidence Intervals to find the true mean within a percentage | It does seem a bit odd for this problem, because there does not appear to be a pivotal statistic or if there is, it isn't the usual Z or T statistic.
Here's why I think this is the case.
The problem o | How to use Confidence Intervals to find the true mean within a percentage
It does seem a bit odd for this problem, because there does not appear to be a pivotal statistic or if there is, it isn't the usual Z or T statistic.
Here's why I think this is the case.
The problem of estimating the population mean, say $\mu$, t... | How to use Confidence Intervals to find the true mean within a percentage
It does seem a bit odd for this problem, because there does not appear to be a pivotal statistic or if there is, it isn't the usual Z or T statistic.
Here's why I think this is the case.
The problem o |
46,946 | Time series cross section forecasting with R | After a bit of research, I can give a partial answer. In his book Wooldridge discusses Poisson and negative binomial regressions for cross-section and panel data. But for regression with lagged variables he only discusses Poisson regression. Maybe negative binomial is discussed in the new edition. The main conclusion ... | Time series cross section forecasting with R | After a bit of research, I can give a partial answer. In his book Wooldridge discusses Poisson and negative binomial regressions for cross-section and panel data. But for regression with lagged varia | Time series cross section forecasting with R
After a bit of research, I can give a partial answer. In his book Wooldridge discusses Poisson and negative binomial regressions for cross-section and panel data. But for regression with lagged variables he only discusses Poisson regression. Maybe negative binomial is discu... | Time series cross section forecasting with R
After a bit of research, I can give a partial answer. In his book Wooldridge discusses Poisson and negative binomial regressions for cross-section and panel data. But for regression with lagged varia |
46,947 | Time series cross section forecasting with R | Maybe you can take a look at the pglm package (from the same author of plm), use the family negbin. You can also try from a Bayesian point of view the MCMCglmm package. | Time series cross section forecasting with R | Maybe you can take a look at the pglm package (from the same author of plm), use the family negbin. You can also try from a Bayesian point of view the MCMCglmm package. | Time series cross section forecasting with R
May be you can take a look at the pglm package (from the same author of plm), use the family negbin. You can also try from a Bayesian point of view the MCMCglmm package. | Time series cross section forecasting with R
May be you can take a look at the pglm package (from the same author of plm), use the family negbin. You can also try from a Bayesian point of view the MCMCglmm package. |
46,948 | Interpolating the empirical cumulative function | The EDF is the CDF of the population constituted by the data themselves. This is exactly what you need to describe and analyze any resampling process from the dataset, including nonparametric bootstrapping, jackknifing, cross-validation, etc. Not only that, it's perfectly general: any kind of interpolation would be i... | Interpolating the empirical cumulative function | The EDF is the CDF of the population constituted by the data themselves. This is exactly what you need to describe and analyze any resampling process from the dataset, including nonparametric bootstr | Interpolating the empirical cumulative function
The EDF is the CDF of the population constituted by the data themselves. This is exactly what you need to describe and analyze any resampling process from the dataset, including nonparametric bootstrapping, jackknifing, cross-validation, etc. Not only that, it's perfect... | Interpolating the empirical cumulative function
The EDF is the CDF of the population constituted by the data themselves. This is exactly what you need to describe and analyze any resampling process from the dataset, including nonparametric bootstr |
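The "EDF is the CDF of the population constituted by the data" view is easy to make concrete: the EDF is a step function that jumps 1/n at each observation. A small Python sketch (the data values are invented):

```python
from bisect import bisect_right

def ecdf(data):
    # Step function: F(x) = (number of observations <= x) / n.
    xs = sorted(data)
    n = len(xs)
    return lambda x: bisect_right(xs, x) / n

F = ecdf([3, 1, 4, 1, 5])
print(F(0), F(1), F(3.5), F(10))  # 0.0 0.4 0.6 1.0
```

Resampling from the dataset with equal probabilities (as in the nonparametric bootstrap) is exactly sampling from this step CDF, which is why no interpolation is needed for the resampling uses the answer lists.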
46,949 | Interpolating the empirical cumulative function | The empirical CDF is just one estimator for the CDF. It's consistent, converges pretty quickly in general, and is dead simple to understand. If you want something fancier you could certainly get a kernel density estimate for the PDF and integrate it to get another estimate for the CDF, which would do some kind of inter... | Interpolating the empirical cumulative function | The empirical CDF is just one estimator for the CDF. It's consistent, converges pretty quickly in general, and is dead simple to understand. If you want something fancier you could certainly get a ker | Interpolating the empirical cumulative function
The empirical CDF is just one estimator for the CDF. It's consistent, converges pretty quickly in general, and is dead simple to understand. If you want something fancier you could certainly get a kernel density estimate for the PDF and integrate it to get another estimat... | Interpolating the empirical cumulative function
The empirical CDF is just one estimator for the CDF. It's consistent, converges pretty quickly in general, and is dead simple to understand. If you want something fancier you could certainly get a ker |
46,950 | Interpolating the empirical cumulative function | I can't answer this question in full generality, but I think I can state one circumstance where it certainly is not useful: The Anderson-Darling test:
\begin{align*}
A^2/n &:= \int_{-\infty}^{\infty} \frac{(F_{n}(x) -F(x))^2}{F(x)(1-F(x))} \, \mathrm{d}F(x) \\
&= \int_{-\infty}^{x_0} \frac{F(x)}{1-F(x)} \, \mathrm{d}F(... | Interpolating the empirical cumulative function | I can't answer this question in full generality, but I think I can state one circumstance where it certainly is not useful: The Anderson-Darling test:
\begin{align*}
A^2/n &:= \int_{-\infty}^{\infty} | Interpolating the empirical cumulative function
I can't answer this question in full generality, but I think I can state one circumstance where it certainly is not useful: The Anderson-Darling test:
\begin{align*}
A^2/n &:= \int_{-\infty}^{\infty} \frac{(F_{n}(x) -F(x))^2}{F(x)(1-F(x))} \, \mathrm{d}F(x) \\
&= \int_{-\... | Interpolating the empirical cumulative function
I can't answer this question in full generality, but I think I can state one circumstance where it certainly is not useful: The Anderson-Darling test:
\begin{align*}
A^2/n &:= \int_{-\infty}^{\infty} |
46,951 | Using k-fold cross-validation to test all data | As far as I understand your question, it can be formulated this way:
Instead of calculating a quality measure for each of the k validation-folds and then calculating the average, may I aggregate all folds and then calculate my quality measure, hence getting only one instead of k values?
This question requires two perspect... | Using k-fold cross-validation to test all data | As far as I understand your question, it can be formulated this way:
Instead of calculating a quality measure for each of the k validation-folds and then calculate the average, may I aggregate all fol | Using k-fold cross-validation to test all data
As far as I understand your question, it can be formulated this way:
Instead of calculating a quality measure for each of the k validation-folds and then calculate the average, may I aggregate all folds an then calculate my quality measure, hence getting only one instead o... | Using k-fold cross-validation to test all data
As far as I understand your question, it can be formulated this way:
Instead of calculating a quality measure for each of the k validation-folds and then calculate the average, may I aggregate all fol |
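The distinction in the reformulated question — average a per-fold measure versus compute one measure on the pooled folds — matters whenever the measure is non-linear in the individual errors. A Python sketch with made-up squared errors for two equal-sized folds, using RMSE as the measure:

```python
import math

# Hypothetical squared errors from two equal-sized validation folds.
fold1 = [1.0, 4.0]
fold2 = [9.0, 16.0]

# Per-fold RMSE, then averaged across folds.
averaged = sum(math.sqrt(sum(f) / len(f)) for f in (fold1, fold2)) / 2

# Pooled: aggregate all held-out errors first, then take one RMSE.
pooled = math.sqrt(sum(fold1 + fold2) / 4)

print(averaged, pooled)  # ~2.558 vs ~2.739: not the same number
```

For a measure that is itself a plain mean (like MSE with equal fold sizes) the two approaches coincide, which is why the choice only becomes visible for measures like RMSE, correlation, or AUC.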
46,952 | Using k-fold cross-validation to test all data | Yes it is; and while this is a very reliable way of reporting error, I would say it is even encouraged. | Using k-fold cross-validation to test all data | Yes it is; and while this is a very reliable way of reporting error, I would say it is even encouraged. | Using k-fold cross-validation to test all data
Yes it is; and while this is a very reliable way of reporting error, I would say it is even encouraged. | Using k-fold cross-validation to test all data
Yes it is; and while this is a very reliable way of reporting error, I would say it is even encouraged. |
46,953 | Using k-fold cross-validation to test all data | I'm not 100% clear on the question, but I have a few points to add:
I'm assuming that the error you are trying to estimate is the prediction error. If so, I agree that 10 fold cross validation would be good (and likely unbiased) approximation of the true prediction error IF your training sets are sufficiently large. La... | Using k-fold cross-validation to test all data | I'm not 100% clear on the question, but I have a few points to add:
I'm assuming that the error you are trying to estimate is the prediction error. If so, I agree that 10 fold cross validation would b | Using k-fold cross-validation to test all data
I'm not 100% clear on the question, but I have a few points to add:
I'm assuming that the error you are trying to estimate is the prediction error. If so, I agree that 10 fold cross validation would be good (and likely unbiased) approximation of the true prediction error I... | Using k-fold cross-validation to test all data
I'm not 100% clear on the question, but I have a few points to add:
I'm assuming that the error you are trying to estimate is the prediction error. If so, I agree that 10 fold cross validation would b |
46,954 | Is it possible to apply Bayes Theorem with only samples from the prior? | The short answer is yes. Have a look at sequential MCMC/ particle filters.
Essentially, your prior consists of a bunch of particles ($M$). So to sample from your prior, just select a particle with probability $1/M$. Since each particle has equal probability of being chosen, this term disappears in the M-H ratio.
A big ... | Is it possible to apply Bayes Theorem with only samples from the prior? | The short answer is yes. Have a look at sequential MCMC/ particle filters.
Essentially, your prior consists of a bunch of particles ($M$). So to sample from your prior, just select a particle with pro | Is it possible to apply Bayes Theorem with only samples from the prior?
The short answer is yes. Have a look at sequential MCMC/ particle filters.
Essentially, your prior consists of a bunch of particles ($M$). So to sample from your prior, just select a particle with probability $1/M$. Since each particle has equal pr... | Is it possible to apply Bayes Theorem with only samples from the prior?
The short answer is yes. Have a look at sequential MCMC/ particle filters.
Essentially, your prior consists of a bunch of particles ($M$). So to sample from your prior, just select a particle with pro |
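A hedged sketch of the "select a particle with probability $1/M$" step the answer describes (the particle values and counts are illustrative, not from any real filter):

```python
import random

random.seed(0)
particles = [0.1, 0.4, 0.7, 1.3, 2.2]   # M = 5 illustrative prior draws
M = len(particles)

# Sampling from the particle prior = choosing one particle uniformly at random,
# so each particle is drawn with probability 1/M.
draws = [random.choice(particles) for _ in range(10_000)]
freq_first = draws.count(particles[0]) / len(draws)   # should be near 1/M = 0.2
```

Because the selection probability is the same $1/M$ for every particle, this term cancels in the Metropolis-Hastings ratio, exactly as the answer notes.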
46,955 | Comparing test-retest reliabilities | Both situations are specific cases of test-retest, except that the recall period is null in the first case you described. I would also expect a larger agreement in the former case, but that may be confounded with a learning or memory effect. A chance-corrected measure of agreement, like Cohen's kappa, can be used with ... | Comparing test-retest reliabilities | Both situations are specific cases of test-retest, except that the recall period is null in the first case you described. I would also expect a larger agreement in the former case, but that may be con | Comparing test-retest reliabilities
Both situations are specific cases of test-retest, except that the recall period is null in the first case you described. I would also expect a larger agreement in the former case, but that may be confounded with a learning or memory effect. A chance-corrected measure of agreement, l... | Comparing test-retest reliabilities
Both situations are specific cases of test-retest, except that the recall period is null in the first case you described. I would also expect a larger agreement in the former case, but that may be con |
46,956 | Comparing test-retest reliabilities | Perhaps, computing the tetrachoric correlation would be useful. See this url: Introduction to the Tetrachoric and Polychoric Correlation Coefficients | Comparing test-retest reliabilities | Perhaps, computing the tetrachoric correlation would be useful. See this url: Introduction to the Tetrachoric and Polychoric Correlation Coefficients | Comparing test-retest reliabilities
Perhaps, computing the tetrachoric correlation would be useful. See this url: Introduction to the Tetrachoric and Polychoric Correlation Coefficients | Comparing test-retest reliabilities
Perhaps, computing the tetrachoric correlation would be useful. See this url: Introduction to the Tetrachoric and Polychoric Correlation Coefficients |
46,957 | Bayes' Theorem and Agresti-Coull: Will it blend? | When applying the formula for P(B|A) for Agresti-Coull, it seems important to me to use, for the denominator (ñ), a number with uncertainty. The formula ñ=P(A)*N+4 (where N is the size of your sample) gives you this number, after you calculate P(A) with an uncertainty. With the uncertainties package, this would be:
#... | Bayes' Theorem and Agresti-Coull: Will it blend? | When applying the formula for P(B|A) for Agresti-Coull, it seems important to me to use, for the denominator (ñ), a number with uncertainty. The formula ñ=P(A)*N+4 (where N is the size of your sample | Bayes' Theorem and Agresti-Coull: Will it blend?
When applying the formula for P(B|A) for Agresti-Coull, it seems important to me to use, for the denominator (ñ), a number with uncertainty. The formula ñ=P(A)*N+4 (where N is the size of your sample) gives you this number, after you calculate P(A) with an uncertainty. ... | Bayes' Theorem and Agresti-Coull: Will it blend?
When applying the formula for P(B|A) for Agresti-Coull, it seems important to me to use, for the denominator (ñ), a number with uncertainty. The formula ñ=P(A)*N+4 (where N is the size of your sample |
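For reference, a minimal sketch of the standard Agresti-Coull interval that the "+4" shortcut approximates (here with a generic $z$ so that $z \approx 2$ recovers the add-4 rule; the counts are illustrative and the uncertainties-package machinery is omitted):

```python
import math

def agresti_coull(successes, n, z=1.96):
    """Agresti-Coull interval: n~ = n + z^2, p~ = (x + z^2/2) / n~."""
    n_tilde = n + z ** 2
    p_tilde = (successes + z ** 2 / 2) / n_tilde
    half = z * math.sqrt(p_tilde * (1 - p_tilde) / n_tilde)
    return p_tilde - half, p_tilde + half

lo, hi = agresti_coull(8, 10)   # e.g. 8 successes in 10 trials
```

With $z = 2$ this is the familiar "add 2 successes and 2 failures" adjustment ($\tilde n = n + 4$).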
46,958 | Bayes' Theorem and Agresti-Coull: Will it blend? | Error propagation won't work in the way handled by the uncertainties package. As you note, they're dependent, so you have to take the covariances into account.
You can obtain the variance of your distribution P(B|A) using the Delta Method and use that to obtain a confidence interval.
With Bayesian inference, you might... | Bayes' Theorem and Agresti-Coull: Will it blend? | Error propagation won't work in the way handled by the uncertainties package. As you note, they're dependent, so you have to take the covariances into account.
You can obtain the variance of your dis | Bayes' Theorem and Agresti-Coull: Will it blend?
Error propagation won't work in the way handled by the uncertainties package. As you note, they're dependent, so you have to take the covariances into account.
You can obtain the variance of your distribution P(B|A) using the Delta Method and use that to obtain a confid... | Bayes' Theorem and Agresti-Coull: Will it blend?
Error propagation won't work in the way handled by the uncertainties package. As you note, they're dependent, so you have to take the covariances into account.
You can obtain the variance of your dis |
46,959 | Bayes' Theorem and Agresti-Coull: Will it blend? | This is a potential answer to the title of the original question and not necessarily the body of the question...
Looking at the Agresti confidence interval measurement, to my eyes it bears a resemblance to a Bayesian estimator.
https://en.wikipedia.org/wiki/Binomial_proportion_confidence_interval#Agresti-Coull_Interval... | Bayes' Theorem and Agresti-Coull: Will it blend? | This is a potential answer to the title of the original question and not necessarily the body of the question...
Looking at the Agresti confidence interval measurement, to my eyes it bears a resemblan | Bayes' Theorem and Agresti-Coull: Will it blend?
This is a potential answer to the title of the original question and not necessarily the body of the question...
Looking at the Agresti confidence interval measurement, to my eyes it bears a resemblance to a Bayesian estimator.
https://en.wikipedia.org/wiki/Binomial_prop... | Bayes' Theorem and Agresti-Coull: Will it blend?
This is a potential answer to the title of the original question and not necessarily the body of the question...
Looking at the Agresti confidence interval measurement, to my eyes it bears a resemblan |
46,960 | Bayes' Theorem and Agresti-Coull: Will it blend? | Brown, Cai, and DasGupta, AS, 2002
Brown, Cai, and DasGupta, Stat Sci, 2001
I don't know if I understand you correctly, but in my knowledge the above two papers are the most cited ones recently when it comes to binomial proportions' CI and estimation.
Sorry if this is not what you wanted. | Bayes' Theorem and Agresti-Coull: Will it blend? | Brown, Cai, and DasGupta, AS, 2002
Brown, Cai, and DasGupta, Stat Sci, 2001
I don't know if I understand you correctly, but in my knowledge the above two papers are the most cited ones recently when i | Bayes' Theorem and Agresti-Coull: Will it blend?
Brown, Cai, and DasGupta, AS, 2002
Brown, Cai, and DasGupta, Stat Sci, 2001
I don't know if I understand you correctly, but in my knowledge the above two papers are the most cited ones recently when it comes to binomial proportions' CI and estimation.
Sorry if this is no... | Bayes' Theorem and Agresti-Coull: Will it blend?
Brown, Cai, and DasGupta, AS, 2002
Brown, Cai, and DasGupta, Stat Sci, 2001
I don't know if I understand you correctly, but in my knowledge the above two papers are the most cited ones recently when i |
46,961 | Metric spaces and the support of a random variable | Here are some technical conveniences of separable metric spaces
(a) If $X$ and $X'$ take values in a separable metric space $(E,d)$ then the event $\{X=X'\}$ is measurable, and this allows to define random variables in the elegant way: a random variable is the equivalence class of $X$ for the "almost surely equals" re... | Metric spaces and the support of a random variable | Here are some technical conveniences of separable metric spaces
(a) If $X$ and $X'$ take values in a separable metric space $(E,d)$ then the event $\{X=X'\}$ is measurable, and this allows to define | Metric spaces and the support of a random variable
Here are some technical conveniences of separable metric spaces
(a) If $X$ and $X'$ take values in a separable metric space $(E,d)$ then the event $\{X=X'\}$ is measurable, and this allows to define random variables in the elegant way: a random variable is the equival... | Metric spaces and the support of a random variable
Here are some technical conveniences of separable metric spaces
(a) If $X$ and $X'$ take values in a separable metric space $(E,d)$ then the event $\{X=X'\}$ is measurable, and this allows to define |
46,962 | Metric spaces and the support of a random variable | Interesting reference. Its value for me lies in questioning the ability of measure theoretic probability to capture an "intuition" about probability (whatever that might mean) and going on to propose an intriguing distinction; namely, between a set of measure zero having a measure zero neighborhood and a set of measur... | Metric spaces and the support of a random variable | Interesting reference. Its value for me lies in questioning the ability of measure theoretic probability to capture an "intuition" about probability (whatever that might mean) and going on to propose | Metric spaces and the support of a random variable
Interesting reference. Its value for me lies in questioning the ability of measure theoretic probability to capture an "intuition" about probability (whatever that might mean) and going on to propose an intriguing distinction; namely, between a set of measure zero hav... | Metric spaces and the support of a random variable
Interesting reference. Its value for me lies in questioning the ability of measure theoretic probability to capture an "intuition" about probability (whatever that might mean) and going on to propose |
46,963 | Comparing noisy data sequences to estimate the likelihood of them being produced by different instances of an identical Markov process | You can perhaps use a hidden markov model (HMM). I know that there is a R package that estimates HMMs but cannot recall its name right now. | Comparing noisy data sequences to estimate the likelihood of them being produced by different instan | You can perhaps use a hidden markov model (HMM). I know that there is a R package that estimates HMMs but cannot recall its name right now. | Comparing noisy data sequences to estimate the likelihood of them being produced by different instances of an identical Markov process
You can perhaps use a hidden markov model (HMM). I know that there is a R package that estimates HMMs but cannot recall its name right now. | Comparing noisy data sequences to estimate the likelihood of them being produced by different instan
You can perhaps use a hidden markov model (HMM). I know that there is a R package that estimates HMMs but cannot recall its name right now. |
46,964 | Comparing noisy data sequences to estimate the likelihood of them being produced by different instances of an identical Markov process | A few thoughts:
Can you not just use a goodness-of-fit test? Choose a distribution and compare both samples. Or use a qqplot. You may want to do this with returns (i.e. changes) instead of the original series, since this is often easier to model. There are also relative distribution functions (see, for instance, t... | Comparing noisy data sequences to estimate the likelihood of them being produced by different instan | A few thoughts:
Can you not just use a goodness-of-fit test? Choose a distribution and compare both samples. Or use a qqplot. You may want to do this with returns (i.e. changes) instead of the ori | Comparing noisy data sequences to estimate the likelihood of them being produced by different instances of an identical Markov process
A few thoughts:
Can you not just use a goodness-of-fit test? Choose a distribution and compare both samples. Or use a qqplot. You may want to do this with returns (i.e. changes) ins... | Comparing noisy data sequences to estimate the likelihood of them being produced by different instan
A few thoughts:
Can you not just use a goodness-of-fit test? Choose a distribution and compare both samples. Or use a qqplot. You may want to do this with returns (i.e. changes) instead of the ori |
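One concrete way to run the goodness-of-fit comparison suggested above is a two-sample Kolmogorov-Smirnov statistic. This minimal pure-Python sketch (data are illustrative) computes the maximum gap between the two empirical CDFs:

```python
import bisect

def ks_statistic(xs, ys):
    """Maximum absolute difference between the empirical CDFs of xs and ys."""
    xs, ys = sorted(xs), sorted(ys)
    d = 0.0
    for t in xs + ys:   # the maximum gap is attained at a data point
        fx = bisect.bisect_right(xs, t) / len(xs)
        fy = bisect.bisect_right(ys, t) / len(ys)
        d = max(d, abs(fx - fy))
    return d

d_same = ks_statistic([1, 2, 3, 4, 5], [1, 2, 3, 4, 5])        # identical samples
d_diff = ks_statistic([1, 2, 3, 4, 5], [11, 12, 13, 14, 15])   # disjoint ranges
```

Large values of the statistic indicate the two samples' distributions differ; for dependent series, applying it to returns (changes) rather than levels, as the answer suggests, is usually safer.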
46,965 | Test if probabilities are statistically different? | If you have 1,000,000 independent "coin flips" that can produce 1 with probability (prob) and 0 with probability (1-prob), then the number of 1's observed will follow a Binomial distribution.
Tests of statistical significance are rejection tests, i.e. reject the hypothesis that the two parameters are equal if the prob... | Test if probabilities are statistically different? | If you have 1,000,000 independent "coin flips" that can produce 1 with probabilty (prob) and 0 with probability (1-prob), then the number of 1's observed will follow a Binomial distribution.
Tests o | Test if probabilities are statistically different?
If you have 1,000,000 independent "coin flips" that can produce 1 with probability (prob) and 0 with probability (1-prob), then the number of 1's observed will follow a Binomial distribution.
Tests of statistical significance are rejection tests, i.e. reject the hypot... | Test if probabilities are statistically different?
If you have 1,000,000 independent "coin flips" that can produce 1 with probability (prob) and 0 with probability (1-prob), then the number of 1's observed will follow a Binomial distribution.
Tests o |
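With counts this large, the usual test is the two-proportion z-test with a pooled standard error. A minimal sketch (the counts are illustrative, not from the question):

```python
import math

def two_proportion_z(successes1, n1, successes2, n2):
    """z statistic for H0: p1 == p2, using the pooled standard error."""
    p1, p2 = successes1 / n1, successes2 / n2
    pooled = (successes1 + successes2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# With 1,000,000 flips per group, even a 0.2 percentage-point gap is detectable:
z = two_proportion_z(501_000, 1_000_000, 499_000, 1_000_000)
```

Comparing $|z|$ to the standard normal quantile (e.g. 1.96 for a 5% two-sided test) gives the rejection decision the answer describes.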
46,966 | How to determine the correlation between two normal random variables conditioned on their sum being negative? | The conditional expectations of $X$ and $Y$ are obviously equal. Moreover, because $(X+Y)/\sqrt 2$ has a standard Normal distribution, its conditional expectation is $-E[|Z|]$ where $Z$ is standard normal, whence
$$E((X+Y)/\sqrt 2\mid X+Y \lt 0) = -E[|Z|] = -\frac{2}{\sqrt {2\pi}} = -\sqrt\frac{2}{\pi},$$... | How to determine the correlation between two normal random variables conditioned on their sum being | The conditional expectations of $X$ and $Y$ are obviously equal. Moreover, because $(X+Y)/\sqrt 2$ has a standard Normal distribution, its conditional expectation is the negative of $-|Z|$ where $Z$ | How to determine the correlation between two normal random variables conditioned on their sum being negative?
The conditional expectations of $X$ and $Y$ are obviously equal. Moreover, because $(X+Y)/\sqrt 2$ has a standard Normal distribution, its conditional expectation is $-E[|Z|]$ where $Z$ is standar... | How to determine the correlation between two normal random variables conditioned on their sum being
The conditional expectations of $X$ and $Y$ are obviously equal. Moreover, because $(X+Y)/\sqrt 2$ has a standard Normal distribution, its conditional expectation is $-E[|Z|]$ where $Z$
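A hedged Monte Carlo check of the conditional mean used above (sample size and seed are illustrative): by symmetry, $E[X \mid X+Y<0] = \tfrac12 E[X+Y \mid X+Y<0] = -1/\sqrt{\pi}$.

```python
import math
import random

random.seed(0)
kept = []
while len(kept) < 200_000:
    x, y = random.gauss(0, 1), random.gauss(0, 1)   # independent standard normals
    if x + y < 0:                                    # keep only the conditioned event
        kept.append(x)

mean_x = sum(kept) / len(kept)
theory = -1 / math.sqrt(math.pi)   # about -0.5642
```

The empirical mean should land very close to the theoretical value, confirming the conditional-expectation calculation.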
46,967 | How to determine the correlation between two normal random variables conditioned on their sum being negative? | Suppose $(\Omega, \mathscr{F}, P)$ is a probability space on which a random variable $\xi$ is defined with $E[|\xi|] < \infty$, this is an interesting problem that reflects the connection between two concepts: $E[\xi|A]$ (where $A$ is an event with $P(A) > 0$) and $E[\xi|\mathscr{G}$] (where $\mathscr{G}$ is a sub-$\s... | How to determine the correlation between two normal random variables conditioned on their sum being | Suppose $(\Omega, \mathscr{F}, P)$ is a probability space on which a random variable $\xi$ is defined with $E[|\xi|] < \infty$, this is an interesting problem that reflects the connection between two | How to determine the correlation between two normal random variables conditioned on their sum being negative?
Suppose $(\Omega, \mathscr{F}, P)$ is a probability space on which a random variable $\xi$ is defined with $E[|\xi|] < \infty$, this is an interesting problem that reflects the connection between two concepts: ... | How to determine the correlation between two normal random variables conditioned on their sum being
Suppose $(\Omega, \mathscr{F}, P)$ is a probability space on which a random variable $\xi$ is defined with $E[|\xi|] < \infty$, this is an interesting problem that reflects the connection between two |
46,968 | Formula of conditional probability when we have discrete and continuous random variables | Suppose that $Y$ and $V$ are defined on a common probability space $(\Omega, \mathscr{F}, P)$, and $V$ has density $f$ with respect to Lebesgue measure, then your conjecture is true. In fact, whether $Y$ is discrete or not is unessential for the proof, as the formula you listed is a special case of a more general rela... | Formula of conditional probability when we have discrete and continuous random variables | Suppose that $Y$ and $V$ are defined on a common probability space $(\Omega, \mathscr{F}, P)$, and $V$ has density $f$ with respect to Lebesgue measure, then your conjecture is true. In fact, whether | Formula of conditional probability when we have discrete and continuous random variables
Suppose that $Y$ and $V$ are defined on a common probability space $(\Omega, \mathscr{F}, P)$, and $V$ has density $f$ with respect to Lebesgue measure, then your conjecture is true. In fact, whether $Y$ is discrete or not is unes... | Formula of conditional probability when we have discrete and continuous random variables
Suppose that $Y$ and $V$ are defined on a common probability space $(\Omega, \mathscr{F}, P)$, and $V$ has density $f$ with respect to Lebesgue measure, then your conjecture is true. In fact, whether |
46,969 | Formula of conditional probability when we have discrete and continuous random variables | Let me offer a (long) intuitive explanation without entering into measure-theoretic arguments.
The main problem is thus how to make sense of the conditional probability $P(Y = y| V=x)$ when $V$ is an (absolutely) continuous random variable and for which $P(V=x)=0$. First of all, rest assured that such a probability exi... | Formula of conditional probability when we have discrete and continuous random variables | Let me offer a (long) intuitive explanation without entering into measure-theoretic arguments.
The main problem is thus how to make sense of the conditional probability $P(Y = y| V=x)$ when $V$ is an | Formula of conditional probability when we have discrete and continuous random variables
Let me offer a (long) intuitive explanation without entering into measure-theoretic arguments.
The main problem is thus how to make sense of the conditional probability $P(Y = y| V=x)$ when $V$ is an (absolutely) continuous random ... | Formula of conditional probability when we have discrete and continuous random variables
Let me offer a (long) intuitive explanation without entering into measure-theoretic arguments.
The main problem is thus how to make sense of the conditional probability $P(Y = y| V=x)$ when $V$ is an |
46,970 | Formula of conditional probability when we have discrete and continuous random variables | Let $(\Omega,\mathcal F,P)$ be a probability space and $V$ a continuous random variable having density $f$ defined on the space and $A$ some element of $\mathcal F$.
Then actually the question is: what is the function prescribed by $$v\mapsto P(A|V=v)\,?$$
Because events like $\{V=v\}$ have probability $0$ we cann... | Formula of conditional probability when we have discrete and continuous random variables | Let $(\Omega,\mathcal F,P)$ be a probability space and $V$ a continuous random variable having density $f$ defined on the space and $A$ some element of $\mathcal F$.
Then actually the question is: wha | Formula of conditional probability when we have discrete and continuous random variables
Let $(\Omega,\mathcal F,P)$ be a probability space and $V$ a continuous random variable having density $f$ defined on the space and $A$ some element of $\mathcal F$.
Then actually the question is: what is the function prescribed by... | Formula of conditional probability when we have discrete and continuous random variables
Let $(\Omega,\mathcal F,P)$ be a probability space and $V$ a continuous random variable having density $f$ defined on the space and $A$ some element of $\mathcal F$.
Then actually the question is: wha |
46,971 | Are only confounders used to generate propensity scores for propensity score matching/IPW? | The balancing property of propensity scores has nothing to do with whether the predictors are confounders or not. It is a purely statistical property that has nothing to do with causal inference or matching, etc. Pearl explains this extremely clearly in his book Causality (see section 11.3.5 here in particular).
Propen... | Are only confounders used to generate propensity scores for propensity score matching/IPW? | The balancing property of propensity scores has nothing to do with whether the predictors are confounders or not. It is a purely statistical property that has nothing to do with causal inference or ma | Are only confounders used to generate propensity scores for propensity score matching/IPW?
The balancing property of propensity scores has nothing to do with whether the predictors are confounders or not. It is a purely statistical property that has nothing to do with causal inference or matching, etc. Pearl explains t... | Are only confounders used to generate propensity scores for propensity score matching/IPW?
The balancing property of propensity scores has nothing to do with whether the predictors are confounders or not. It is a purely statistical property that has nothing to do with causal inference or ma |
46,972 | What does it mean by "maximum likelihood estimation (MLE) problem is unbounded"? | $\DeclareMathOperator{\diag}{diag}$
$\DeclareMathOperator{\tr}{tr}$
A great question. Most standard multivariate analysis texts only treated the $N > m$ case to give the MLE of $\mu$ and $X$ (or more frequently, $\Sigma = X^{-1}$). The $N \leq m$ case seems always overlooked (or merely qualitatively mentioned).
In sho... | What does it mean by "maximum likelihood estimation (MLE) problem is unbounded"? | $\DeclareMathOperator{\diag}{diag}$
$\DeclareMathOperator{\tr}{tr}$
A great question. Most standard multivariate analysis texts only treated the $N > m$ case to give the MLE of $\mu$ and $X$ (or more | What does it mean by "maximum likelihood estimation (MLE) problem is unbounded"?
$\DeclareMathOperator{\diag}{diag}$
$\DeclareMathOperator{\tr}{tr}$
A great question. Most standard multivariate analysis texts only treated the $N > m$ case to give the MLE of $\mu$ and $X$ (or more frequently, $\Sigma = X^{-1}$). The $N... | What does it mean by "maximum likelihood estimation (MLE) problem is unbounded"?
$\DeclareMathOperator{\diag}{diag}$
$\DeclareMathOperator{\tr}{tr}$
A great question. Most standard multivariate analysis texts only treated the $N > m$ case to give the MLE of $\mu$ and $X$ (or more |
46,973 | EFA: N factors are too many for N variables | The reason is that factanal implements maximum likelihood estimation, which imposes a constraint.
Given a random sample $X_1,\ldots,X_n$, the factor model assumes that $$X_i-\mu = LF +\epsilon,$$
where $L$ is the matrix of factor loadings, $F$ are the factor scores and $\epsilon$ has the diagonal covariance matrix $\Psi$ of specific variances.
I... | EFA: N factors are too many for N variables | The reason is that factanal implements maximum likelihood estimation, which imposes a constraint.
Given a random sample $X_1,\ldots,X_n$, the factor model assumes that $$X_i-\mu = LF +\epsilon,$$
where $L | EFA: N factors are too many for N variables
The reason is that factanal implements maximum likelihood estimation, which imposes a constraint.
Given a random sample $X_1,\ldots,X_n$, the factor model assumes that $$X_i-\mu = LF +\epsilon,$$
where $L$ is the matrix of factor loadings, $F$ are the factor scores and $\Psi$ is ... | EFA: N factors are too many for N variables
The reason is that factanal implements maximum likelihood estimation, which imposes a constraint.
Given a random sample $X_1,\ldots,X_n$, the factor model assumes that $$X_i-\mu = LF +\epsilon,$$
where $L |
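The constraint can be made concrete: after fixing the rotational indeterminacy, the ML factor model's degrees of freedom are $d = \tfrac12\left[(p-k)^2 - (p+k)\right]$, and factanal refuses any $k$ with $d < 0$. A quick sketch (the enumeration range is illustrative):

```python
def factor_dof(p, k):
    """Degrees of freedom of the k-factor ML model for p observed variables."""
    # (p-k)^2 - (p+k) is always even, so integer division is exact
    return ((p - k) ** 2 - (p + k)) // 2

# (p, k) pairs for which ML estimation is identified (dof >= 0)
feasible = [(p, k) for p in range(3, 8) for k in range(1, p)
            if factor_dof(p, k) >= 0]
```

For example, 3 variables admit at most 1 factor (dof exactly 0), and 6 variables admit at most 3, matching factanal's "k factors are too many for p variables" error.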
46,974 | Why does univariate Mahalanobis distance not match z-score? | Your intuition about the Mahalanobis distance is correct. However, the EllipticEnvelope algorithm computes robust estimates of the location and covariance matrix which don't match the raw estimates. (See the scikit-learn documentation for details.) In practice, this means that the z scores you compute by hand are not e... | Why does univariate Mahalanobis distance not match z-score? | Your intuition about the Mahalanobis distance is correct. However, the EllipticEnvelope algorithm computes robust estimates of the location and covariance matrix which don't match the raw estimates. ( | Why does univariate Mahalanobis distance not match z-score?
Your intuition about the Mahalanobis distance is correct. However, the EllipticEnvelope algorithm computes robust estimates of the location and covariance matrix which don't match the raw estimates. (See the scikit-learn documentation for details.) In practice... | Why does univariate Mahalanobis distance not match z-score?
Your intuition about the Mahalanobis distance is correct. However, the EllipticEnvelope algorithm computes robust estimates of the location and covariance matrix which don't match the raw estimates. ( |
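A minimal check of the claim (data are illustrative): with the raw sample mean and variance, the one-dimensional Mahalanobis distance reduces exactly to $|z|$; robust location and scatter estimates such as EllipticEnvelope's would generally differ.

```python
import math
import statistics

data = [1.0, 2.0, 2.5, 3.0, 100.0]   # illustrative sample with one gross outlier
mu = statistics.mean(data)
sd = statistics.pstdev(data)          # raw (non-robust) scale, matching a plain z-score

z_scores = [(x - mu) / sd for x in data]
mahalanobis = [math.sqrt((x - mu) ** 2 / sd ** 2) for x in data]   # equals |z| in 1D
```

On data with outliers, a robust estimator would shrink the location/scale toward the bulk of the points, which is exactly why the hand-computed z scores and EllipticEnvelope's distances disagree.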
46,975 | How is the denominator in one sample Z test of proportion derived? | When you take a random sample of size $n$ and observe a binary outcome, to conduct this test you first code one of the possible outcomes as $1$ and the other as $0.$ Your model is that the probability of $1$ is some unknown number $p_0.$
Letting the values in the sample be the (random variables) $X_1, X_2, \ldots, X_n... | How is the denominator in one sample Z test of proportion derived? | When you take a random sample of size $n$ and observe a binary outcome, to conduct this test you first code one of the possible outcomes as $1$ and the other as $0.$ Your model is that the probabilit | How is the denominator in one sample Z test of proportion derived?
When you take a random sample of size $n$ and observe a binary outcome, to conduct this test you first code one of the possible outcomes as $1$ and the other as $0.$ Your model is that the probability of $1$ is some unknown number $p_0.$
Letting the va... | How is the denominator in one sample Z test of proportion derived?
When you take a random sample of size $n$ and observe a binary outcome, to conduct this test you first code one of the possible outcomes as $1$ and the other as $0.$ Your model is that the probabilit |
46,976 | How is the denominator in one sample Z test of proportion derived? | The denominator is the standard deviation of the sampling distribution, which is known as the standard error. Standard error, at least under the assumptions of the z-test, is equal to the population standard deviation divided by the square root of the sample size.
For the proportion variable you are considering, calcul... | How is the denominator in one sample Z test of proportion derived? | The denominator is the standard deviation of the sampling distribution, which is known as the standard error. Standard error, at least under the assumptions of the z-test, is equal to the population s | How is the denominator in one sample Z test of proportion derived?
The denominator is the standard deviation of the sampling distribution, which is known as the standard error. Standard error, at least under the assumptions of the z-test, is equal to the population standard deviation divided by the square root of the s... | How is the denominator in one sample Z test of proportion derived?
The denominator is the standard deviation of the sampling distribution, which is known as the standard error. Standard error, at least under the assumptions of the z-test, is equal to the population s |
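Putting the two pieces together, the one-sample z statistic for a proportion can be sketched directly (the counts and $p_0$ are illustrative): the denominator is $\sqrt{p_0(1-p_0)/n}$, the standard deviation of the sample mean of Bernoulli($p_0$) draws.

```python
import math

def one_sample_prop_z(successes, n, p0):
    """z statistic for H0: p = p0, using the null standard error."""
    p_hat = successes / n
    se = math.sqrt(p0 * (1 - p0) / n)   # sd of a Bernoulli(p0) is sqrt(p0*(1-p0))
    return (p_hat - p0) / se

z = one_sample_prop_z(60, 100, 0.5)   # 60 successes in 100 trials vs p0 = 0.5
```

Here the statistic is $ (0.6 - 0.5)/\sqrt{0.25/100} = 2$, which would be compared to standard normal quantiles.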
46,977 | Proper loss function for regression with uniform target distribution | using the L2 norm assumes that the target is normally distributed
Sorry, but this is nonsense. (There is a lot of nonsense on the internet.)
Your choice of error measure or loss function assumes nothing about the (conditional or unconditional) distribution of the target variable. Rather, different loss functions elici... | Proper loss function for regression with uniform target distribution | using the L2 norm assumes that the target is normally distributed
Sorry, but this is nonsense. (There is a lot of nonsense on the internet.)
Your choice of error measure or loss function assumes noth | Proper loss function for regression with uniform target distribution
using the L2 norm assumes that the target is normally distributed
Sorry, but this is nonsense. (There is a lot of nonsense on the internet.)
Your choice of error measure or loss function assumes nothing about the (conditional or unconditional) distri... | Proper loss function for regression with uniform target distribution
using the L2 norm assumes that the target is normally distributed
Sorry, but this is nonsense. (There is a lot of nonsense on the internet.)
Your choice of error measure or loss function assumes noth |
46,978 | Visualizing repeated measures (not longitudinal) | Ed Tufte's spare redesign of the boxplot permits a large "small multiple" graphic to be displayed. Another point Tufte makes is that by ordering small multiples according to another factor, one often gets "free" information out of the graphic. Ordering the plots by median or box height is usually insightful, because ... | Visualizing repeated measures (not longitudinal) | Ed Tufte's spare redesign of the boxplot permits a large "small multiple" graphic to be displayed. Another point Tufte makes is that by ordering small multiples according to another factor, one often | Visualizing repeated measures (not longitudinal)
Ed Tufte's spare redesign of the boxplot permits a large "small multiple" graphic to be displayed. Another point Tufte makes is that by ordering small multiples according to another factor, one often gets "free" information out of the graphic. Ordering the plots by med... | Visualizing repeated measures (not longitudinal)
Ed Tufte's spare redesign of the boxplot permits a large "small multiple" graphic to be displayed. Another point Tufte makes is that by ordering small multiples according to another factor, one often |
46,979 | Visualizing repeated measures (not longitudinal) | In my opinion the 2nd plot is pretty good. I might just add colour = so that each individual has their own colour, but the two main things that jump out about that plot are:
there is considerable variation between individuals
there is, by comparison, much less variation within individuals
there is considerable hete... | Visualizing repeated measures (not longitudinal) | In my opinion the 2nd plot is pretty good. I might just add colour = so that each individual has their own colour, but the two main things that jump out about that plot are:
there is considerable va | Visualizing repeated measures (not longitudinal)
In my opinion the 2nd plot is pretty good. I might just add colour = so that each individual has their own colour, but the two main things that jump out about that plot are:
there is considerable variation between individuals
there is, by comparison, much less variati... | Visualizing repeated measures (not longitudinal)
In my opinion the 2nd plot is pretty good. I might just add colour = so that each individual has their own colour, but the two main things that jump out about that plot are:
there is considerable va
46,980 | Transforming a Kumaraswamy distribution to a gamma distribution? | Let $q$ be the quantile function (inverse cdf) of the desired gamma with whatever parameters are required, and let $X \sim \text{Kumaraswamy}(a,b)$. Then (following the same general method given in part (ii) of Step 1 here), we get that
$Y = q(1-(1-X^a)^b)$ has the required gamma distribution.
If you don't care which g...
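A quick numerical check of this transform (a Python sketch; the parameter values are arbitrary, and the target gamma is taken with shape 1, i.e. an exponential, so that the quantile function $q$ has a closed form — in general $q$ would come from a numerical routine such as R's qgamma):

```python
import math
import random

random.seed(0)

a, b = 2.0, 3.0   # Kumaraswamy shape parameters (arbitrary choice)
rate = 1.5        # target gamma: shape 1 (an exponential) with this rate

def kumaraswamy_sample(a, b):
    # Inverse-CDF sampling from F(x) = 1 - (1 - x^a)^b
    u = random.random()
    return (1.0 - (1.0 - u) ** (1.0 / b)) ** (1.0 / a)

def gamma1_quantile(p, rate):
    # Quantile of Gamma(shape=1, rate), i.e. Exponential(rate)
    return -math.log(1.0 - p) / rate

ys = []
for _ in range(200_000):
    x = kumaraswamy_sample(a, b)
    u = 1.0 - (1.0 - x ** a) ** b   # = F(x), Uniform(0,1) by the probability integral transform
    ys.append(gamma1_quantile(u, rate))

print(sum(ys) / len(ys))   # should be close to 1/rate ≈ 0.667
```

The sample mean of the transformed draws matches the target gamma mean, as the probability integral transform predicts.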
46,981 | How to calculate lower bound on $P \left[|Y| > \frac{|\lambda|}{2} \right]$? | Edit: This answer applies to the original question, which was: "for ANY random variable Y, what is the lower bound of (formula)"
The lower bound is 0.
Take $(Y_{n})$ a sequence of Bernoulli random variables such that $P(Y_{n} = 1) = 1/n$.
Then ${E(Y_{n})= \lambda = 1/n}$.
$P(\lvert Y_{n} \rvert > \frac{\lvert \lambda ...
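The construction can be made concrete for a few values of $n$: since $|\lambda|/2 = 1/(2n) < 1$, the event $\{|Y_n| > |\lambda|/2\}$ is exactly $\{Y_n = 1\}$, whose probability $1/n$ tends to 0.

```python
# For Y_n ~ Bernoulli(1/n): E[Y_n] = lambda = 1/n, and since |lambda|/2 = 1/(2n) < 1,
# the event {|Y_n| > |lambda|/2} is exactly {Y_n = 1}, which has probability 1/n.
for n in [10, 100, 1000, 10_000]:
    lam = 1.0 / n
    p_exceed = 1.0 / n          # P(|Y_n| > |lambda|/2) = P(Y_n = 1)
    print(n, lam, p_exceed)     # p_exceed shrinks toward 0 as n grows
```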
46,982 | How to calculate lower bound on $P \left[|Y| > \frac{|\lambda|}{2} \right]$? | Given $\lambda,$ there must be a universal lower bound $p(\lambda)$ (because $0$ will certainly work.) The question is whether there are any $\lambda$ where this bound exceeds $0.$ Regardless, being a lower bound means that for any random variable $Y$ with $E[Y^2]\lt \infty$ and $E[Y]=\lambda,$ $$\Pr(|Y| \gt |\lambda...
46,983 | Gradient of the log likelihood for energy based models | The issue emerges in the evaluation of the second term in lines $(3)$ and $(4)$ of your derivation. Note that
$$\nabla_{\theta} Z(\theta)^{-1} = \nabla_{\theta} \frac{1}{\int_x \exp(-E_{\theta}(x))\, dx} \neq \nabla_{\theta}\int_x \exp(E_{\theta}(x)) \, dx.$$
Instead, we have
$$\nabla_{\theta} Z(\theta)^{-1} = \nabla_{...
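The answer is truncated at this point; the usual way the chain-rule computation proceeds, consistent with the definition $Z(\theta)=\int_x \exp(-E_{\theta}(x))\,dx$ used above, is:

```latex
\nabla_{\theta} Z(\theta)^{-1}
  = -Z(\theta)^{-2}\,\nabla_{\theta} Z(\theta)
  = Z(\theta)^{-2} \int_x \exp(-E_{\theta}(x))\,\nabla_{\theta} E_{\theta}(x)\, dx ,
```

which, combined with the first term of the log-likelihood gradient, yields the familiar contrastive form
$$\nabla_{\theta} \log p_{\theta}(x) = -\nabla_{\theta} E_{\theta}(x) + \mathbb{E}_{x' \sim p_{\theta}}\left[\nabla_{\theta} E_{\theta}(x')\right].$$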
46,984 | Estimate normal distribution from dnorm in R | For a normal density function $f,$ if you have a grid of points $x$ and corresponding density values $y = f(x),$
then you can use numerical integration to find $\mu$ and $\sigma.$ [See Note (2) at the end.]
If you have many realizations $X_i$ from the distribution, you can estimate the population mean $\mu$ by the sample...
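A sketch of the numerical-integration idea in Python: $\mu \approx \sum_i x_i f(x_i)\,\Delta x$ and $\sigma^2 \approx \sum_i (x_i-\mu)^2 f(x_i)\,\Delta x$. The true values $\mu = 5$, $\sigma = 2$ below are made up for the demo, and the pdf values stand in for a grid returned by R's dnorm:

```python
import math

mu_true, sigma_true = 5.0, 2.0    # hypothetical parameters for the demo

def f(x):  # normal pdf, standing in for density values from dnorm()
    return math.exp(-0.5 * ((x - mu_true) / sigma_true) ** 2) / (sigma_true * math.sqrt(2 * math.pi))

dx = 0.001
grid = [mu_true - 10 * sigma_true + i * dx for i in range(int(20 * sigma_true / dx) + 1)]
ys = [f(x) for x in grid]

# Riemann-sum approximations of the first two moments
mu_hat = sum(x * y for x, y in zip(grid, ys)) * dx
var_hat = sum((x - mu_hat) ** 2 * y for x, y in zip(grid, ys)) * dx
sigma_hat = math.sqrt(var_hat)
print(mu_hat, sigma_hat)    # ≈ 5.0, 2.0
```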
46,985 | Estimate normal distribution from dnorm in R | A very simple, general-purpose solution:
First, write a function that takes parameters as an input, and returns the difference between the predicted PDF for those parameters and the actual PDF (I've used the sum of squared differences here).
Then, use optim() to find the parameters that minimise this function.
x = seq(-...
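The same recipe in a Python sketch, with an exhaustive grid search standing in as a crude substitute for R's optim() (the true parameters $\mu = 1.5$, $\sigma = 0.8$ are made up for the demo):

```python
import math

def npdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

xs = [i * 0.1 for i in range(-100, 101)]    # evaluation grid, like R's seq(-10, 10, 0.1)
target = [npdf(x, 1.5, 0.8) for x in xs]    # "observed" pdf values (true mu=1.5, sigma=0.8)

def sse(mu, sigma):
    # sum of squared differences between candidate pdf and observed pdf
    return sum((npdf(x, mu, sigma) - t) ** 2 for x, t in zip(xs, target))

# crude stand-in for optim(): exhaustive search over a parameter grid
best = min(
    ((mu / 10, s / 10) for mu in range(-30, 31) for s in range(1, 31)),
    key=lambda p: sse(p[0], p[1]),
)
print(best)   # (1.5, 0.8)
```

A real optimiser would of course refine the answer off the grid; this just shows the loss-minimisation structure of the approach.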
46,986 | Why is $k = \sqrt{N}$ a good solution of the number of neighbors to consider? | There are a number of quantitative finite-sample results, and also asymptotic arguments, in support of using the heuristic $k = \sqrt{n}$, where $n$ is the sample size. However, in practice, it would seem that this heuristic really should only be a starting point for selecting $k$ using data-dependent methods.
Theoreti...
46,987 | Does oversampling lead to more overfitting than classweights for really small classes? | This depends at least a little on the model being used. Most often, simple oversampling is asymptotically equivalent to using class weights: an integer weight $w$ on a datapoint has an equivalent effect on loss calculations as duplicating the datapoint $w$ times. Oversampling then is just a discrete version of class-...
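The weight-vs-duplication equivalence is easy to verify directly (a Python sketch using log loss; the weight of 5 and the prediction 0.3 are arbitrary):

```python
import math

def log_loss(y, p):
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

# one majority point (y=0) and one minority point (y=1), model prediction p=0.3
data = [(0, 0.3), (1, 0.3)]

# class-weighted loss: weight 5 on the minority class
weighted = sum((5 if y == 1 else 1) * log_loss(y, p) for y, p in data)

# oversampled loss: duplicate the minority point 5 times instead
oversampled_data = [(0, 0.3)] + [(1, 0.3)] * 5
oversampled = sum(log_loss(y, p) for y, p in oversampled_data)

print(abs(weighted - oversampled) < 1e-12)   # True: identical totals
```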
46,988 | What would be a good objective function for a computer vision model that predicts rotation? | A common approach is to have the neural network output $u, v$ representing the angle $\hat \theta$ such that $\cos(\hat \theta) = \frac{u}{\sqrt{u^2+v^2}}$, $\sin(\hat \theta) = \frac{v}{\sqrt{u^2+v^2}}$. This avoids "boundaries" in the output which might prove challenging for backprop to learn.
Each rotation $\theta$ has a correspo...
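One way to see why this representation has no boundary issues: the angle can be recovered from any unnormalized output $(u, v)$ with atan2, which handles all four quadrants (a Python sketch; the scale factor 3 is arbitrary, since the normalization is implicit in atan2):

```python
import math

def angle_from_uv(u, v):
    # Recover the angle from an unnormalized 2-vector output; dividing by
    # sqrt(u^2 + v^2) is unnecessary because atan2 only uses the direction.
    return math.atan2(v, u)

theta = 2.5                                            # ground-truth rotation in radians
u, v = 3.0 * math.cos(theta), 3.0 * math.sin(theta)    # network output, arbitrary scale
print(angle_from_uv(u, v))                             # ≈ 2.5
```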
46,989 | What would be a good objective function for a computer vision model that predicts rotation? | In circular statistics, your suggestion $\min((y-\hat{y})^2, (y-\hat{y}-2\pi)^2, (\hat{y}-y+2\pi)^2),$ which we could call arc distance loss, is actually one of the known loss functions that can be used. It works, and intuitively it is certainly more sensible than the categorical approach, for the reason you mention. The re...
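A direct implementation of the arc distance loss (a Python sketch), showing that it treats angles just below $2\pi$ as close to angles just above $0$:

```python
import math

def arc_distance_loss(y, y_hat):
    # the min over three branches handles wrap-around at 0 / 2*pi
    d = y - y_hat
    return min(d ** 2, (d - 2 * math.pi) ** 2, (d + 2 * math.pi) ** 2)

# angles 0.1 and 2*pi - 0.1 are only 0.2 radians apart on the circle
print(arc_distance_loss(0.1, 2 * math.pi - 0.1))   # ≈ 0.04, not (2*pi - 0.2)^2
```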
46,990 | What is the best way to regress proportions (as both dependent and independent variables)? | Suppose we have a model such as
$$y = x$$
where $y$ and $x$ are some measurements in a number of samples. Now, if we introduce a third variable, something like a number of subjects in each sample or size of each population, $z$, and we wish to form another model so that we are dealing with proportions, we could have th...
46,991 | Difference between an “Empirical Strategy” and an “Identification Strategy” in econometrics? | Both terms are fairly loaded terms with meanings that will depend on who is using it.
Broadly speaking, "empirical strategy" is an umbrella term used by researchers to indicate their overall "process" in approaching a question and delivering an answer. Indeed, Angrist and Krueger write in Empirical Strategies in Labor ...
46,992 | Interpreting the VIF in checking the multicollinearity in logistic regression | Yes, you can use VIF in the same way for logistic regression as you would in linear regression.
Variance inflation factor measures how much the behaviour (variance) of an independent variable is influenced, or inflated, by its interaction/correlation with the other independent variables. Variance inflation factors allo...
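For intuition: with exactly two predictors, the $R^2$ from regressing one predictor on the other is just their squared correlation, so $\text{VIF} = 1/(1 - r^2)$; and since VIF involves only the predictors, the computation is identical whether the outcome model is linear or logistic. A Python sketch on made-up data:

```python
import math
import random

random.seed(1)

# two correlated predictors (hypothetical data); VIF only looks at the predictors
x1 = [random.gauss(0, 1) for _ in range(5000)]
x2 = [0.9 * a + 0.2 * random.gauss(0, 1) for a in x1]   # strongly collinear with x1

def corr(u, v):
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    return cov / math.sqrt(sum((a - mu) ** 2 for a in u) * sum((b - mv) ** 2 for b in v))

r2 = corr(x1, x2) ** 2
vif = 1.0 / (1.0 - r2)   # with two predictors, R^2_j is just the squared correlation
print(vif)               # well above the usual thresholds
```

With more than two predictors one would regress each predictor on all the others to get $R^2_j$, but the $1/(1-R^2_j)$ form is the same.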
46,993 | Interpreting the VIF in checking the multicollinearity in logistic regression | One caveat.
While in linear regression, it is traditional to use a threshold of 10 for VIFs, in logistic regression you use a much lower value. But afaik, I haven't seen any work detailing what such value might be.
46,994 | Using Gaussian Processes to learn a function online | This is pretty straightforward to do with Bayesian learning since it corresponds to sequentially updating the posterior over $f$ as more and more data comes in. Bayesian optimization uses this a lot so that's one application to look at for this kind of thing.
$\newcommand{\f}{\mathbf f}$$\newcommand{\one}{\mathbf 1}$Le...
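The flavor of sequential posterior updating can be seen in the simplest Gaussian special case: an unknown scalar with a conjugate normal prior, which is the one-dimensional analogue of the multivariate conjugate update a GP performs (a Python sketch; prior and noise values are assumed):

```python
# Sequential Bayesian updating, simplest Gaussian case:
# unknown scalar f with prior N(m, s2); observations y_i = f + noise, noise variance n2.
# A GP does the multivariate version of exactly this conjugate update.
m, s2 = 0.0, 10.0      # prior mean and variance (assumed)
n2 = 1.0               # known observation-noise variance (assumed)

for y in [2.1, 1.9, 2.2, 2.0]:      # data arriving one point at a time
    k = s2 / (s2 + n2)              # gain for this observation
    m = m + k * (y - m)             # updated posterior mean
    s2 = (1 - k) * s2               # posterior variance shrinks with each point
    print(m, s2)
```

Processing the points one at a time gives exactly the same posterior as a single batch update, which is why online learning is natural here.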
46,995 | Trying to make sense of claims regarding Rao-Blackwell and Lehmann-Scheffé for sufficient/complete statistics | There is a complete sufficient statistic for $\theta$ in a model ${\cal P}_\theta$ if and only if the minimal sufficient statistic is complete (according to Lehmann "An Interpretation of Completeness and Basu’s Theorem"). This means you can't have distinct $T_1(X)$ and $T_2(X)$ the way you want. As the paper says (fir...
46,996 | How do I cross validate when I don't have a test set? | Since the dataset is small, you can estimate out-of-sample error/performance via leave-one-out cross validation, and compare LOOCV performances of the two models.
Note that, alongside the benefits, LOOCV has its cons as well: It's computationally more expensive, and it may have larger variance in the performance metric...
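A minimal LOOCV sketch in Python: each point is held out once, the model is fit on the rest, and squared prediction errors are averaged. The data and the two "models" (predict the training mean vs. a crude median) are made up for illustration:

```python
# Leave-one-out CV on a tiny hypothetical sample containing one outlier.
data = [3.1, 2.9, 3.4, 3.0, 2.8, 3.3, 9.0]

def loocv_mse(data, fit):
    errs = []
    for i in range(len(data)):
        train = data[:i] + data[i + 1:]   # all points except the i-th
        pred = fit(train)
        errs.append((data[i] - pred) ** 2)
    return sum(errs) / len(errs)

mean_model = lambda xs: sum(xs) / len(xs)
median_model = lambda xs: sorted(xs)[len(xs) // 2]   # crude (upper) median

print(loocv_mse(data, mean_model), loocv_mse(data, median_model))
```

Whichever model attains the lower LOOCV error would be preferred; with the outlier present, the median-based model tends to do better on this sample.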
46,997 | How do I cross validate when I don't have a test set? | You could use a Bayesian machine learning model. The distribution of your predictions shows you how accurate the model is. If your model overfits it will have a large variance in the predictions, because every model of the ensemble overfits differently. When you have enough data the distribution is very narrow and when...
46,998 | What is a partial chi-square statistic according to Frank Harrell? | This is an appendix to @EdM's answer (+1). I look at the implementation (in base R) to make sure I understand the partial $\chi^2$ statistic. The R code borrows heavily from rms::anova. This is for illustration only, so there are restrictions; most importantly, it's assumed the predictors have unique names.
library("...
46,999 | What is a partial chi-square statistic according to Frank Harrell? | For other than ordinary least squares (OLS) regression, the anova() function in Harrell's rms package performs Wald tests on individual coefficients and sets of related coefficients; Wald tests are an option for OLS models. The Wald $\chi^2$ statistic used in the test for a coefficient or a set of coefficients is the "...
47,000 | How to know if the p value will increase or decrease | I believe that more precision could be added to exactly solve the problem and know which test we are talking about. But because you are talking about one sample mean and one standard deviation, I will assume a classic Z-test statistic.
You are trying to see if the average of your sample $\bar{x}$ is significantly different...
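Under that assumption, the two-sided p-value is $p = 2(1 - \Phi(|z|))$ with $z = (\bar{x} - \mu_0)/(\sigma/\sqrt{n})$, so increasing $|\bar{x} - \mu_0|$ or increasing $n$ shrinks $p$. A Python sketch with made-up numbers:

```python
import math

def p_value(xbar, mu0, sigma, n):
    # two-sided one-sample z-test: z = (xbar - mu0) / (sigma / sqrt(n))
    z = (xbar - mu0) / (sigma / math.sqrt(n))
    phi = 0.5 * (1 + math.erf(abs(z) / math.sqrt(2)))   # standard normal CDF at |z|
    return 2 * (1 - phi)

print(p_value(10.5, 10, 2, 30))    # baseline
print(p_value(11.0, 10, 2, 30))    # bigger deviation from mu0: p decreases
print(p_value(10.5, 10, 2, 120))   # bigger sample size: p decreases
```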