48,801 | How to set SMOTE parameters in R package DMwR?

I would be surprised if you did see improvement when you had 30% 'rare' data in the training set. 30% isn't really all that rare in the context of machine learning. What you could do is cross-validate with various levels of synthetic data to determine what's giving you the best accuracy on your hold-out data (pretty st...
48,802 | One step ahead forecast with new data collected sequentially

You don't need the loop here. The one-step forecasts are the same as fitted values in a time series model. So the following should do what you want:

library(forecast)
model <- auto.arima(y)
newfit <- Arima(c(y, new.data), model=model)
onestep.for <- fitted(newfit)[1001:1010]
48,803 | critical value of a point mass at zero and a chi square distribution with one degree of freedom

The area to the right of any point above 0 is half that of a $\chi_1^2$. So to get a level $\alpha$ test, look up the $2\alpha$ point of a $\chi_1^2$...
... as long as $\alpha < 0.5$.

Of course, p-values work similarly. Look the value up as if it were a $\chi_1^2$ and halve the resulting p-value.
48,804 | critical value of a point mass at zero and a chi square distribution with one degree of freedom

Some R code, if interested.

Using the TcGSA package:

ss <- 0.3
sample_mixt <- TcGSA:::rchisqmix(n=1e5, s=0, q=1)
TcGSA:::pval_simu(s=ss, sample_mixt)

Using base R:

1/2*(1-pchisq(ss, df=1))
48,805 | Law of total expectation and how to prove that two variables are independent

It's impossible to read minds, so to locate any error in thinking let's help you work through as simple an example as possible, using the most basic definitions and axioms of probability.

Consider a sample space $\Omega = \{au, bu, cu, av, bv, cv\}$ ("$au$" etc. are just names of six abstract things) where all ...
48,806 | IRT in R: Does anyone know of an IRT item calibration function that can cope with NA's?

As I stated in the comments above, missing data can be handled by either the ltm or mirt package when the data is MCAR. Here is an example of how to use both on a dataset with missing values:

library(ltm)
library(mirt)
set.seed(1234)
dat <- expand.table(LSAT7)
dat[sample(1:(nrow(dat)*ncol(dat)), 150)] <- NA
...
48,807 | IRT in R: Does anyone know of an IRT item calibration function that can cope with NA's?

The package eRm (Mair & Hatzinger) also handles missing values, but eRm only estimates unidimensional models: http://erm.r-forge.r-project.org/
48,808 | Measuring k-means clustering quality on training and test sets

The problem, in particular with k-means applied to real-world, labeled data, is that clusters will usually not agree with your labels very well, unless you either generated the labels by using a similar clustering algorithm (self-fulfilling prophecy) or the data set is really simple.

Have you tried computing the k-mean...
48,809 | A measure of overall variance from multivariate Gaussian

Just like the univariate variance is the average squared distance to the mean, $\operatorname{trace}(\hat{\bf{\Sigma}})$ is the average squared distance to the centroid: with $\dot{\bf{X}}$ as the matrix of the centered variables, $\hat{\bf{\Sigma}} = \frac{1}{n} \dot{\bf{X}}' \dot{\bf{X}}$, where $\dot{\bf{X}}' \dot{\bf{X}}$ is the m...
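This identity is easy to verify numerically; a minimal Python sketch on toy random data (using the $1/n$ normalization from the answer):

```python
import random

random.seed(42)
# Toy data: 200 points in 3 dimensions
data = [[random.gauss(0, 1) for _ in range(3)] for _ in range(200)]
n, p = len(data), len(data[0])
centroid = [sum(row[j] for row in data) / n for j in range(p)]

# trace of the 1/n-normalized covariance matrix = sum of per-coordinate variances
trace_cov = sum(sum((row[j] - centroid[j]) ** 2 for row in data) / n for j in range(p))

# average squared Euclidean distance from each point to the centroid
avg_sq_dist = sum(sum((row[j] - centroid[j]) ** 2 for j in range(p)) for row in data) / n

assert abs(trace_cov - avg_sq_dist) < 1e-9  # the two quantities coincide exactly
```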
48,810 | Mutual Information really invariant to invertible transformations?

If you define $I(X; X)$ for continuous random variables at all, the proper value for it is infinite, not $I(X; X) = H(X)$. Essentially, the value of $X$ gives you an infinite amount of information about $X$. If $X$ is simply a uniformly random real number, for instance, it almost surely takes an infinite number of bits to ...
48,811 | Urn with non-uniform probabilities

The same model is used by poker players to estimate the probability of finishing in each place in a tournament given the stack sizes. It is called the Independent Chip Model, or ICM. You can download my program ICM Explorer, which can calculate the finishing probabilities for up to $10$ balls/players.

Although there does...
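For small fields the urn/ICM probabilities can be enumerated directly; a hedged Python sketch (the function name is mine, not taken from ICM Explorer):

```python
from itertools import permutations

def icm_finish_probs(stacks):
    """P(player i finishes in place k) under the urn/ICM model: players are
    drawn one at a time with probability proportional to remaining stacks."""
    n = len(stacks)
    probs = [[0.0] * n for _ in range(n)]
    for order in permutations(range(n)):
        p, remaining = 1.0, float(sum(stacks))
        for player in order:
            p *= stacks[player] / remaining
            remaining -= stacks[player]
        for place, player in enumerate(order):
            probs[player][place] += p
    return probs

probs = icm_finish_probs([50, 30, 20])
# First-place probabilities are just the stack shares: 0.5, 0.3, 0.2
```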
48,812 | Multicollinearity between categorical and continuous variable

I'm converting a previous comment to an answer, expanding a bit based on a follow-up comment from the OP. The original, unedited comment was:

There is no silver bullet for decomposing variation in that situation. One thing you can do with two collinear predictors, $x_1, x_2$, is fit a model $x_1 \sim x_2$, take the resi...
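The truncated suggestion is the standard residualization trick: regress one predictor on the other and keep the residuals. A minimal pure-Python sketch (function name and toy data are mine):

```python
def residualize(x1, x2):
    """Residuals of the simple least-squares regression x1 ~ x2: the part of
    x1 that is (in-sample) uncorrelated with x2."""
    n = len(x1)
    m1, m2 = sum(x1) / n, sum(x2) / n
    beta = (sum((a - m2) * (b - m1) for a, b in zip(x2, x1))
            / sum((a - m2) ** 2 for a in x2))
    alpha = m1 - beta * m2
    return [b - (alpha + beta * a) for b, a in zip(x1, x2)]

x2 = [1.0, 2.0, 3.0, 4.0, 5.0]
x1 = [2.1, 3.9, 6.2, 7.8, 10.1]   # nearly collinear with x2
r = residualize(x1, x2)
# By construction r sums to zero and is orthogonal to x2
```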
48,813 | Predicting continuous variables from text features

A similar question has been asked on Stack Overflow:
https://stackoverflow.com/questions/15087322/how-to-predict-a-continuous-value-time-from-text-documents

One answer there was to use k-nearest-neighbor regression to predict a continuous value from text documents; see https://stackoverflow.com/a/15089788/179014.
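k-nearest-neighbor regression is easy to sketch; a toy pure-Python version (dense feature vectors stand in for, e.g., bag-of-words counts; names and data are illustrative only):

```python
def knn_regress(train_X, train_y, x, k=3):
    """Predict a continuous target as the mean target of the k nearest
    training points (squared Euclidean distance)."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(row, x)), y)
        for row, y in zip(train_X, train_y)
    )
    return sum(y for _, y in dists[:k]) / k

train_X = [[0.0], [1.0], [2.0], [10.0]]
train_y = [0.0, 1.0, 2.0, 10.0]
pred = knn_regress(train_X, train_y, [1.0], k=3)  # mean of the 3 nearest targets (0, 1, 2)
```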
48,814 | Predicting continuous variables from text features

I recommend gradient boosting with trees, as described in chapter 10, "Boosting and Additive Trees", of The Elements of Statistical Learning. This approach is suitable for bag-of-words data, can capture interactions of word features, and can be used for both regression and classification.
48,815 | Predicting continuous variables from text features

There is a type of Bayesian linear regression that handles the case of many features. It is called latent factor regression, and you can find a good description in the paper "Bayesian Factor Regression Models in the 'Large p, Small n' Paradigm". If the number of latent factors is large, then it is equivalent to linear r...
48,816 | Count explanatory variable, proportion dependent variable

You have a binary response. That is the important part of this. The count status of your explanatory variable doesn't matter. As a result, you should be doing some form of logistic regression. The part that makes this more difficult is that your data are clustered within just four participants. That means you need...
48,817 | Bootstrapping unbalanced clustered data (non-parametric bootstrap)

With clustered data, you have 500 degrees of freedom anyway. It does not matter that your nominal sample size may be 1005 or 1320 or whatever the number will be. The sampling variance of your estimates will generally improve only to the extent that you increase the number of clusters. So I would not see the random sam...
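A minimal sketch of the nonparametric cluster bootstrap under discussion: resample whole clusters with replacement, so the nominal observation count varies from replicate to replicate while the number of clusters stays fixed. (The pooled mean as the statistic is my assumption for illustration.)

```python
import random

def cluster_bootstrap(clusters, n_boot=1000, seed=0):
    """Resample whole clusters with replacement and recompute a statistic
    (here: the pooled mean) on each bootstrap sample."""
    rng = random.Random(seed)
    reps = []
    for _ in range(n_boot):
        sample = [rng.choice(clusters) for _ in range(len(clusters))]
        pooled = [x for cl in sample for x in cl]   # nominal n varies by replicate
        reps.append(sum(pooled) / len(pooled))
    return reps

# Unbalanced clusters of different sizes
clusters = [[1.0, 2.0], [3.0], [2.0, 2.0, 4.0], [5.0, 1.0]]
reps = cluster_bootstrap(clusters, n_boot=500)
```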
48,818 | expectation of an exponential function [closed]

Wikipedia's page on the log-normal distribution has the more general result for distributions with non-zero location parameter $\mu$.

It notes that, for the lognormal distribution defined as
$$X = e^{\mu + \sigma Z}$$
with $Z$ a standard normal variable, the expectation is
$$\mathbb{E}[X] = e^{\mu + \sigma^2/2}.$$
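A quick Monte Carlo sanity check of the formula (standard-library Python; the parameter values are arbitrary choices of mine):

```python
import math
import random

random.seed(0)
mu, sigma = 0.5, 0.8
n = 200_000
# Simulate X = exp(mu + sigma*Z) with Z standard normal and average
mc = sum(math.exp(mu + sigma * random.gauss(0, 1)) for _ in range(n)) / n
exact = math.exp(mu + sigma ** 2 / 2)
# mc approaches exact as n grows (Monte Carlo error is O(1/sqrt(n)))
```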
48,819 | Measure of variation around Median

Since the median is a statistic estimated from sample data, it has an associated sample standard error, which can give confidence intervals and tests of location for that value.

The variance of a normally distributed random variable can be used directly to compute exact confidence intervals for sample means of IID rando...
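One common way to estimate that standard error is the bootstrap; a minimal Python sketch (this is my illustration, not the answer's own derivation, and the data are made up):

```python
import random
import statistics

def bootstrap_se_median(x, n_boot=2000, seed=1):
    """Bootstrap standard error of the sample median: resample with
    replacement, recompute the median, take the SD of the replicates."""
    rng = random.Random(seed)
    meds = [statistics.median(rng.choices(x, k=len(x))) for _ in range(n_boot)]
    return statistics.stdev(meds)

x = [2.3, 1.9, 4.1, 3.3, 2.8, 5.0, 3.7, 2.2, 4.4, 3.1]
se = bootstrap_se_median(x)
```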
48,820 | With a small sample from a normal distribution, do you simulate using a t distribution?

Using your assumptions that the points came from a normal distribution with unknown mean and variance, the t distribution is the correct distribution to sample from no matter how many data points you have, because it is the posterior predictive distribution of your model. You might want to check your formula, though, as ...
48,821 | With a small sample from a normal distribution, do you simulate using a t distribution?

You could generate a vector of means from a normal distribution (or t if you prefer) representing your uncertainty in the mean, then generate a vector of variances from a $\chi^2$ distribution representing your uncertainty in the variance, then generate the actual observations from a normal with your vector of means an...
48,822 | With a small sample from a normal distribution, do you simulate using a t distribution?

I would elaborate on Neil G's and Greg Snow's answers as follows:

- run a noninformative Bayesian inference for your original $10$ data values
- use the posterior predictive distribution to generate new data

The posterior predictive distribution derived from a noninformative prior exactly aims to provide your desire: a distr...
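The two steps above can be sketched in standard-library Python. This is a hedged sketch of the usual conjugate-update sampler under the noninformative prior $p(\mu,\sigma^2)\propto 1/\sigma^2$ (marginally, the draws follow the scaled, shifted $t_{n-1}$ the other answers mention; function name and data are mine):

```python
import random
import statistics

def posterior_predictive_draws(data, n_draws=5, seed=0):
    """Draw new observations from the posterior predictive of a normal model
    under the noninformative prior p(mu, sigma^2) proportional to 1/sigma^2."""
    rng = random.Random(seed)
    n = len(data)
    m = statistics.mean(data)
    s2 = statistics.variance(data)                # sample variance, n-1 denominator
    draws = []
    for _ in range(n_draws):
        chi2 = rng.gammavariate((n - 1) / 2, 2)   # chi^2_{n-1} draw
        sigma2 = (n - 1) * s2 / chi2              # scaled inverse-chi^2 posterior
        mu = rng.gauss(m, (sigma2 / n) ** 0.5)    # mu | sigma^2, data
        draws.append(rng.gauss(mu, sigma2 ** 0.5))  # new observation
    return draws

draws = posterior_predictive_draws(
    [2.1, 1.9, 2.5, 2.0, 2.2, 1.8, 2.4, 2.3, 2.0, 2.1], n_draws=10
)
```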
48,823 | What test should I use to determine if a policy change had a statistically significant impact on website registrations?

You are describing "intervention analysis" or "interrupted time series". It refers to estimating how much an intervention has changed a time series. (Intervention-analysis is even one of the tags here, so I am proposing an edit to add it to your question.)

Among other ways, it can be done using an autoregressive integ...
48,824 | What test should I use to determine if a policy change had a statistically significant impact on website registrations?

Yes, you can simply do a t-test, although you may very well have confounding variables that will affect how you want to go about this, and perhaps you may want to use an ANOVA with blocks.

One confounding variable that you may want to watch out for is effects over time. Does the site have more sign-ups in certain parts...
48,825 | Internal consistency reliability in item response theory models

You can compute test information curves from your IRT parameter estimates. These curves give you the precision of the test at each $\theta$ of the latent trait. The information $I$ can be transformed into the standard error of estimate $SEE$, which is a direct estimate of the reliability of the test at that $\theta$: $...
48,826 | Appropriateness of K-S test and Kruskal-Wallis for assessing medical data set

1) Only assess normality for the cases where you assume it. (I don't think this is an issue in your case, but it's a common problem, so it bears mentioning.)

2) When checking a normality assumption, a Q-Q plot is a better idea than a formal hypothesis test; hypothesis tests don't actually answer the relevant question.
...
48,827 | Can I use a likelihood ratio test when the error distributions differ? | This is more a set of rambling comments and thoughts than an actual answer, but it's a bit long for a comment. It might end up as something approximating an answer of at least some slight value eventually, though, if it is edited enough times by me or other people.
Generally, the Neyman-Pearson Lemma doesn't apply exce... | Can I use a likelihood ratio test when the error distributions differ? | This is more a set of rambling comments and thoughts than an actual answer, but it's a bit long for a comment. It might end up as something approximating an answer of at least some slight value eventu | Can I use a likelihood ratio test when the error distributions differ?
This is more a set of rambling comments and thoughts than an actual answer, but it's a bit long for a comment. It might end up as something approximating an answer of at least some slight value eventually, though, if it is edited enough times by me ... | Can I use a likelihood ratio test when the error distributions differ?
This is more a set of rambling comments and thoughts than an actual answer, but it's a bit long for a comment. It might end up as something approximating an answer of at least some slight value eventu |
48,828 | Can I use a likelihood ratio test when the error distributions differ? | I think that you understand that nested comparisons are well understood using the LRT.
Non-nested comparisons are also possible. Consider two alternative distributions, one parameterized by $\theta$, $f_\theta(x)$, the other parameterized by $\eta$, $f_\eta(x)$, with $\theta$ and $\eta$ of equal length (this condition... | Can I use a likelihood ratio test when the error distributions differ? | I think that you understand that nested comparisons are well understood using the LRT.
Non-nested comparisons are also possible. Consider two alternative distributions, one parameterized by $\theta$, | Can I use a likelihood ratio test when the error distributions differ?
I think that you understand that nested comparisons are well understood using the LRT.
Non-nested comparisons are also possible. Consider two alternative distributions, one parameterized by $\theta$, $f_\theta(x)$, the other parameterized by $\eta$... | Can I use a likelihood ratio test when the error distributions differ?
I think that you understand that nested comparisons are well understood using the LRT.
Non-nested comparisons are also possible. Consider two alternative distributions, one parameterized by $\theta$, |
48,829 | Shifted intercepts in logistic regression | Your shifted, average score is:
\begin{align}
M(X,\beta,\delta,\alpha) &= \frac{\sum_{p(X\beta+\alpha)>\delta} p(X\beta+\alpha)}{\sum \mathbf{1}_{p(X\beta+\alpha)>\delta}}
\end{align}
Your question is basically "Is $M$ monotonically increasing in $\alpha$?" The answer is that it is not. Once you put the question this... | Shifted intercepts in logistic regression | Your shifted, average score is:
\begin{align}
M(X,\beta,\delta,\alpha) &= \frac{\sum_{p(X\beta+\alpha)>\delta} p(X\beta+\alpha)}{\sum \mathbf{1}_{p(X\beta+\alpha)>\delta}}
\end{align}
Your question is | Shifted intercepts in logistic regression
Your shifted, average score is:
\begin{align}
M(X,\beta,\delta,\alpha) &= \frac{\sum_{p(X\beta+\alpha)>\delta} p(X\beta+\alpha)}{\sum \mathbf{1}_{p(X\beta+\alpha)>\delta}}
\end{align}
Your question is basically "Is $M$ monotonically increasing in $\alpha$?" The answer is that ... | Shifted intercepts in logistic regression
Your shifted, average score is:
\begin{align}
M(X,\beta,\delta,\alpha) &= \frac{\sum_{p(X\beta+\alpha)>\delta} p(X\beta+\alpha)}{\sum \mathbf{1}_{p(X\beta+\alpha)>\delta}}
\end{align}
Your question is |
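The non-monotonicity of $M$ in $\alpha$ claimed in the answer is easy to demonstrate numerically. A small illustrative sketch (the two linear predictors $x'\beta$ below are invented to make a borderline case):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def shifted_average_score(scores, alpha, delta):
    """M(alpha): mean shifted probability over the points whose shifted
    probability exceeds the threshold delta (None if none qualify)."""
    kept = [p for p in (sigmoid(s + alpha) for s in scores) if p > delta]
    return sum(kept) / len(kept) if kept else None

# one point just below the threshold, one well above it
scores = [-0.04, 2.2]
m_a = shifted_average_score(scores, alpha=0.0, delta=0.5)
m_b = shifted_average_score(scores, alpha=0.1, delta=0.5)
# m_b < m_a: raising alpha pushed the borderline point over the threshold,
# which dragged the average down -- so M is not monotone in alpha
```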
48,830 | Not standardizing outcome, standardizing predictors only | No, it's not really correct.
The questions about (and advantages and disadvantages of) standardizing variables are very similar for dependent and independent variables, with one rather questionable exception: The idea that standardizing independent variables makes it easier to compare the effects of one variable to ano... | Not standardizing outcome, standardizing predictors only | No, it's not really correct.
The questions about (and advantages and disadvantages of) standardizing variables are very similar for dependent and independent variables, with one rather questionable ex | Not standardizing outcome, standardizing predictors only
No, it's not really correct.
The questions about (and advantages and disadvantages of) standardizing variables are very similar for dependent and independent variables, with one rather questionable exception: The idea that standardizing independent variables make... | Not standardizing outcome, standardizing predictors only
No, it's not really correct.
The questions about (and advantages and disadvantages of) standardizing variables are very similar for dependent and independent variables, with one rather questionable ex |
48,831 | Do correlated and/or derived fields require special consideration when using Random Forest? | Classification and regression trees do not have the same type of multicollinearity issues that you have in multiple linear regression. Splits are based on best-split criteria from which you have choices with the Gini index being the most commonly used one. In fact, I think it is beneficial to have highly correlated v... | Do correlated and/or derived fields require special consideration when using Random Forest? | Classification and regression trees do not have the same type of multicollinearity issues that you have in multiple linear regression. Splits are based on best-split criteria from which you have choi | Do correlated and/or derived fields require special consideration when using Random Forest?
Classification and regression trees do not have the same type of multicollinearity issues that you have in multiple linear regression. Splits are based on best-split criteria from which you have choices with the Gini index bein... | Do correlated and/or derived fields require special consideration when using Random Forest?
Classification and regression trees do not have the same type of multicollinearity issues that you have in multiple linear regression. Splits are based on best-split criteria from which you have choi |
48,832 | How to test the hypothesis of dependency between price and demand | I think you need to be careful in distinguishing between demand and quantity demanded. Quantity demanded would rise when prices fall, not the demand itself, which is merely the relationship between price and quantity demanded. It's a movement along the curve (the slope of which you care about), rather than a movemen... | How to test the hypothesis of dependency between price and demand | I think you need to be careful in distinguishing between demand and quantity demanded. Quantity demanded would rise when prices fall, not the demand itself, which is merely the relationship betwe | How to test the hypothesis of dependency between price and demand
I think you need to be careful in distinguishing between demand and quantity demanded. Quantity demanded would rise when prices fall, not the demand itself, which is merely the relationship between price and quantity demanded. It's a movement along th... | How to test the hypothesis of dependency between price and demand
I think you need to be careful in distinguishing between demand and quantity demanded. Quantity demanded would rise when prices fall, not the demand itself, which is merely the relationship betwe |
48,833 | How to test the hypothesis of dependency between price and demand | Although this is a time series that might best be handled through time series modeling, let us assume that the drops are independent and only (possibly depend on manna dropping slightly). Then you could look at this like analyzing an intervention. Take the set of paired differences between demand prior to the price d... | How to test the hypothesis of dependency between price and demand | Although this is a time series that might best be handled through time series modeling, let us assume that the drops are independent and only (possibly depend on manna dropping slightly). Then you co | How to test the hypothesis of dependency between price and demand
Although this is a time series that might best be handled through time series modeling, let us assume that the drops are independent and only (possibly depend on manna dropping slightly). Then you could look at this like analyzing an intervention. Take... | How to test the hypothesis of dependency between price and demand
Although this is a time series that might best be handled through time series modeling, let us assume that the drops are independent and only (possibly depend on manna dropping slightly). Then you co |
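The paired-differences idea in this answer can be sketched as a paired t-test computed from scratch; all the demand figures below are invented for illustration:

```python
import math
import statistics

# invented demand just before and just after each of 8 price drops
before = [100, 96, 102, 98, 95, 101, 99, 97]
after  = [108, 99, 110, 104, 97, 109, 103, 100]

diffs = [a - b for a, b in zip(after, before)]
n = len(diffs)
mean_d = statistics.mean(diffs)              # average demand jump at a drop
sd_d = statistics.stdev(diffs)
t_stat = mean_d / (sd_d / math.sqrt(n))      # paired t statistic, df = n - 1
```

A large positive `t_stat` is evidence that demand rises when the price is cut; with few drops, a signed-rank test on `diffs` is a reasonable nonparametric alternative.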
48,834 | What are the rules / guidelines for downsampling? | If you keep all the positives from your data set you may find that you have skewed your results.
A priori the probability of a positive in your data is about 1 in a hundred. If you down sample so you have 100K +ve and 100K -ve the a priori +ve probability is now 1 in 2.
Unless there is a large separation with little... | What are the rules / guidelines for downsampling? | If you keep all the positives from your data set you may find that you have skewed your results.
A priori the probability of a positive in your data is about 1 in a hundred. If you down sample so yo | What are the rules / guidelines for downsampling?
If you keep all the positives from your data set you may find that you have skewed your results.
A priori the probability of a positive in your data is about 1 in a hundred. If you down sample so you have 100K +ve and 100K -ve the a priori +ve probability is now 1 in ... | What are the rules / guidelines for downsampling?
If you keep all the positives from your data set you may find that you have skewed your results.
A priori the probability of a positive in your data is about 1 in a hundred. If you down sample so yo |
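If you do train on downsampled data, scores can be mapped back to the original base rate by rescaling the odds — a standard prior-correction sketch, which assumes the model is reasonably calibrated on the balanced sample:

```python
def correct_downsampled_prob(p_model, pi_true, pi_sampled):
    """Map a probability from a model fit on downsampled data back to the
    original base rate by rescaling the odds."""
    odds = p_model / (1.0 - p_model)
    shift = (pi_true / (1.0 - pi_true)) / (pi_sampled / (1.0 - pi_sampled))
    adj = odds * shift
    return adj / (1.0 + adj)

# 1-in-100 positives in the full data, 1-in-2 after the 100K/100K downsample
p_mid = correct_downsampled_prob(0.5, pi_true=0.01, pi_sampled=0.5)   # -> 0.01
p_high = correct_downsampled_prob(0.9, pi_true=0.01, pi_sampled=0.5)
```

Note how a 0.5 score on the balanced sample maps back to the 1% prior — exactly the distortion the answer warns about.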
48,835 | Presenting the error term in a quantile regression specification | These are statistical models, so, of course, an error term is assumed. Most likely it is an additive error term. The book may not make it explicit but the fact that it is not shown in the equation should not be interpreted to mean that no error term is assumed. The author probably thinks that the error term is impli... | Presenting the error term in a quantile regression specification | These are statistical models, so, of course, an error term is assumed. Most likely it is an additive error term. The book may not make it explicit but the fact that it is not shown in the equation s | Presenting the error term in a quantile regression specification
These are statistical models, So, of course an error term is assumed. Most likely it is an additive error term. The book may not make it explicit but the fact that it is not shown in the equation should not be interpreted to mean that no error term is ... | Presenting the error term in a quantile regression specification
These are statistical models, So, of course an error term is assumed. Most likely it is an additive error term. The book may not make it explicit but the fact that it is not shown in the equation s |
48,836 | Presenting the error term in a quantile regression specification | When we state the model, the error is usually the thing which measures the accuracy of the model. I.e if we model variable $y$ with model $M$, the error is $y-M$. Hence when you state the model there is no error there. Suppose your model $M$ is $f(X)$. Then
$$y = f(X)+\varepsilon$$
For general $f$ this type of model ... | Presenting the error term in a quantile regression specification | When we state the model, the error is usually the thing which measures the accuracy of the model. I.e if we model variable $y$ with model $M$, the error is $y-M$. Hence when you state the model there | Presenting the error term in a quantile regression specification
When we state the model, the error is usually the thing which measures the accuracy of the model. I.e if we model variable $y$ with model $M$, the error is $y-M$. Hence when you state the model there is no error there. Suppose your model $M$ is $f(X)$. Th... | Presenting the error term in a quantile regression specification
When we state the model, the error is usually the thing which measures the accuracy of the model. I.e if we model variable $y$ with model $M$, the error is $y-M$. Hence when you state the model there |
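For reference, the fitting criterion that replaces squared error in quantile regression is the check (pinball) loss; a minimal illustrative sketch with invented data:

```python
def check_loss(u, tau):
    """Koenker's check (pinball) loss: rho_tau(u) = u * (tau - 1{u < 0})."""
    return u * (tau - (1.0 if u < 0 else 0.0))

def avg_quantile_loss(y, q, tau):
    """Average check loss of the constant prediction q; minimized when q
    is the tau-th sample quantile (the median for tau = 0.5)."""
    return sum(check_loss(yi - q, tau) for yi in y) / len(y)

y = [1.0, 2.0, 3.0, 4.0, 10.0]
loss_at_median = avg_quantile_loss(y, 3.0, 0.5)   # 3.0 is the sample median
loss_elsewhere = avg_quantile_loss(y, 8.0, 0.5)
```

This makes the answer's point concrete: the "error" $y - M$ enters through the asymmetric loss, not through an explicit term in the stated model.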
48,837 | How to decide which decision tree classifier to use? | Start with J4.8 since it is fastest to train and generally gives good results. Also its output is human readable, so you can see if it makes sense. It has tree visualizers to aid understanding. It is among the most used data mining algorithms.
If J4.8 does not give you good enough solutions, try other algorithms.
R... | How to decide which decision tree classifier to use? | Start with J4.8 since it is fastest to train and generally gives good results. Also its output is human readable therefore you can see if it makes sense. It has tree visualizers to aid understanding. | How to decide which decision tree classifier to use?
Start with J4.8 since it is fastest to train and generally gives good results. Also its output is human readable therefore you can see if it makes sense. It has tree visualizers to aid understanding. It is among most used data mining algorithms.
If J4.8 does not give... | How to decide which decision tree classifier to use?
Start with J4.8 since it is fastest to train and generally gives good results. Also its output is human readable therefore you can see if it makes sense. It has tree visualizers to aid understanding. |
48,838 | Designing a Simple (a.k.a 'bad') Ranking value from several values of unknown distribution | There are a huge number of potential functions that could work. What you want is probably a weighted sum, but with some transformation of the variables prior to adding them. The choice of transformation and weights is really arbitrary, and a reasonable way to do it is to programme them into a spreadsheet and then alter... | Designing a Simple (a.k.a 'bad') Ranking value from several values of unknown distribution | There are a huge number of potential functions that could work. What you want is probably a weighted sum, but with some transformation of the variables prior to adding them. The choice of transformati | Designing a Simple (a.k.a 'bad') Ranking value from several values of unknown distribution
There are a huge number of potential functions that could work. What you want is probably a weighted sum, but with some transformation of the variables prior to adding them. The choice of transformation and weights is really arbi... | Designing a Simple (a.k.a 'bad') Ranking value from several values of unknown distribution
There are a huge number of potential functions that could work. What you want is probably a weighted sum, but with some transformation of the variables prior to adding them. The choice of transformati |
48,839 | Friedman's test for binary data - possible or not? | When the repeated-measures or related-samples data are dichotomous Friedman nonparametric test degenerates into Cochran's Q test (Friedman's chi-square statistic becomes identical to Cochran's Q statistic) which is the extension of McNemar's test from 2 to several related samples. McNemar's uses exact binomial computat... | Friedman's test for binary data - possible or not? | When the repeated-measures or related-samples data are dichotomous Friedman nonparametric test degenerates into Cochran's Q test (Friedman's chi-square statistic becomes identical to Cochran's Q stati | Friedman's test for binary data - possible or not?
When the repeated-measures or related-samples data are dichotomous Friedman nonparametric test degenerates into Cochran's Q test (Friedman's chi-square statistic becomes identical to Cochran's Q statistic) which is the extension of McNemar's test from 2 to several rela... | Friedman's test for binary data - possible or not?
When the repeated-measures or related-samples data are dichotomous Friedman nonparametric test degenerates into Cochran's Q test (Friedman's chi-square statistic becomes identical to Cochran's Q stati |
48,840 | Conjugate prior for a binomial-like distribution | There's no conjugate prior for this likelihood. Likelihoods that admit conjugate distributions correspond to data distributions that are members of some exponential family. Having a non-linear function of the parameters in the log-likelihood makes it impossible for the data distribution to belong to an exponential fami... | Conjugate prior for a binomial-like distribution | There's no conjugate prior for this likelihood. Likelihoods that admit conjugate distributions correspond to data distributions that are members of some exponential family. Having a non-linear functio | Conjugate prior for a binomial-like distribution
There's no conjugate prior for this likelihood. Likelihoods that admit conjugate distributions correspond to data distributions that are members of some exponential family. Having a non-linear function of the parameters in the log-likelihood makes it impossible for the d... | Conjugate prior for a binomial-like distribution
There's no conjugate prior for this likelihood. Likelihoods that admit conjugate distributions correspond to data distributions that are members of some exponential family. Having a non-linear functio |
48,841 | How to tell how extreme an outlier is? | Actually, neither: compute how that point is extreme with respect to a robust estimator of location $l_x$ using a robust estimator of scale $s_x$. In essence, if your original point was an outlier, you will be essentially ignoring it in the computation of $(l_x,s_x)$. if your original point was not an outlier, it will ... | How to tell how extreme an outlier is? | Actually, neither: compute how that point is extreme with respect to a robust estimator of location $l_x$ using a robust estimator of scale $s_x$. In essence, if your original point was an outlier, yo | How to tell how extreme an outlier is?
Actually, neither: compute how that point is extreme with respect to a robust estimator of location $l_x$ using a robust estimator of scale $s_x$. In essence, if your original point was an outlier, you will be essentially ignoring it in the computation of $(l_x,s_x)$. if your orig... | How to tell how extreme an outlier is?
Actually, neither: compute how that point is extreme with respect to a robust estimator of location $l_x$ using a robust estimator of scale $s_x$. In essence, if your original point was an outlier, yo |
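A common concrete choice for the robust pair $(l_x, s_x)$ in this answer is the median and the MAD. An illustrative sketch (the data are invented; the 1.4826 factor makes the MAD a consistent estimate of $\sigma$ under normality, and the code assumes the MAD is nonzero):

```python
import statistics

def robust_z(x, data):
    """Outlyingness of x measured against the median and the MAD,
    so the scale estimate essentially ignores the outlier itself."""
    med = statistics.median(data)
    mad = statistics.median(abs(v - med) for v in data)
    return (x - med) / (1.4826 * mad)

data = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 25.0]   # one gross outlier
z_outlier = robust_z(25.0, data)   # huge: flagged as extreme
z_typical = robust_z(10.0, data)   # near zero: unremarkable
```

Unlike a mean/SD z-score, the outlier here barely inflates the scale estimate, so its extremeness is not masked.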
48,842 | How to tell how extreme an outlier is? | Before you get too comfortable with removing "outliers" you might want to look at the outliers dataset in the TeachingDemos package for R and work through the examples on the help page.
It would be good to look through more discussion on the topic, one place to start is wikipedia. It also includes some of the other me... | How to tell how extreme an outlier is? | Before you get too comfortable with removing "outliers" you might want to look at the outliers dataset in the TeachingDemos package for R and work through the examples on the help page.
It would be go | How to tell how extreme an outlier is?
Before you get too comfortable with removing "outliers" you might want to look at the outliers dataset in the TeachingDemos package for R and work through the examples on the help page.
It would be good to look through more discussion on the topic, one place to start is wikipedia.... | How to tell how extreme an outlier is?
Before you get too comfortable with removing "outliers" you might want to look at the outliers dataset in the TeachingDemos package for R and work through the examples on the help page.
It would be go |
48,843 | How to tell how extreme an outlier is? | Here's some advice available on the web: from http://www.autobox.com/cms/index.php/blog, a software site that focuses on this subject. I am involved in software development for this site.
Why don't simple outlier methods work? The argument against our competition.
For a couple of reasons:
It wasn't an outlier. It was... | How to tell how extreme an outlier is? | Here's some advice available on the web : from http://www.autobox.com/cms/index.php/blog , a software site that focuses on this subject. I am involved in software development for this site.
Why don't | How to tell how extreme an outlier is?
Here's some advice available on the web : from http://www.autobox.com/cms/index.php/blog , a software site that focuses on this subject. I am involved in software development for this site.
Why don't simple outlier methods work? The argument against our competition.
For a couple o... | How to tell how extreme an outlier is?
Here's some advice available on the web : from http://www.autobox.com/cms/index.php/blog , a software site that focuses on this subject. I am involved in software development for this site.
Why don't |
48,844 | Details regarding the delete-a-group jackknife | PSU, or the primary sampling unit, is the object or a group of objects what you sample in the first stage of a multi-stage sample. Typically in large scale national studies, this could be a county or a census tract. Then you go down to the level of city blocks (secondary sampling units), dwellings, households, and indi... | Details regarding the delete-a-group jackknife | PSU, or the primary sampling unit, is the object or a group of objects what you sample in the first stage of a multi-stage sample. Typically in large scale national studies, this could be a county or | Details regarding the delete-a-group jackknife
PSU, or the primary sampling unit, is the object or a group of objects what you sample in the first stage of a multi-stage sample. Typically in large scale national studies, this could be a county or a census tract. Then you go down to the level of city blocks (secondary s... | Details regarding the delete-a-group jackknife
PSU, or the primary sampling unit, is the object or a group of objects what you sample in the first stage of a multi-stage sample. Typically in large scale national studies, this could be a county or |
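The delete-a-group variance formula itself is short; one common form is $\hat{V} = \frac{G-1}{G}\sum_g (\hat\theta_{(g)} - \hat\theta)^2$. An illustrative sketch with round-robin group assignment and the sample mean as the estimator — the PSU totals below are invented, and real designs form the groups randomly within strata:

```python
import statistics

def delete_a_group_jackknife_var(units, G, estimator):
    """Delete-a-group jackknife variance: split the units (e.g. PSUs) into
    G groups, recompute the estimate leaving each group out in turn, and
    combine the replicates."""
    groups = [units[g::G] for g in range(G)]
    theta_hat = estimator(units)
    replicates = []
    for g in range(G):
        kept = [u for h, grp in enumerate(groups) if h != g for u in grp]
        replicates.append(estimator(kept))
    return (G - 1) / G * sum((r - theta_hat) ** 2 for r in replicates)

units = [3.1, 2.8, 3.5, 3.0, 2.9, 3.3, 3.2, 2.7, 3.4, 3.6]  # invented PSU totals
var_g5 = delete_a_group_jackknife_var(units, G=5, estimator=statistics.mean)
# with G = n this reduces to the leave-one-out jackknife, whose variance
# estimate for the mean equals s^2 / n
var_loo = delete_a_group_jackknife_var(units, G=len(units), estimator=statistics.mean)
```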
48,845 | What are U-type statistics? | From the comments and the answer I got that "U-type statistics" is jargon for "U-statistics".
Here are a couple of elements taken from the reference provided by @cardinal, and in the previous answer. A U-statistic of degree or order $r$ is based on a permutation symmetric kernel function $h$ of arity $r$
$$ h(x_1, ...... | What are U-type statistics? | From the comments and the answer I got that "U-type statistics" is jargon for "U-statistics".
Here are a couple of elements taken from the reference provided by @cardinal, and in the previous answer. | What are U-type statistics?
From the comments and the answer I got that "U-type statistics" is jargon for "U-statistics".
Here are a couple of elements taken from the reference provided by @cardinal, and in the previous answer. A U-statistics of degree or order $r$ is based on a permutation symmetric kernel function $h... | What are U-type statistics?
From the comments and the answer I got that "U-type statistics" is jargon for "U-statistics".
Here are a couple of elements taken from the reference provided by @cardinal, and in the previous answer. |
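A classical concrete case of the definition above: with the order-2 kernel $h(x,y) = (x-y)^2/2$, the U-statistic is exactly the unbiased sample variance. A minimal illustrative sketch:

```python
import itertools
import statistics

def u_statistic(data, kernel, r):
    """U-statistic of order r: average of a symmetric kernel over all
    size-r subsets of the sample."""
    combos = list(itertools.combinations(data, r))
    return sum(kernel(*c) for c in combos) / len(combos)

x = [2.0, 4.0, 7.0, 1.0, 5.0]
mean_u = u_statistic(x, lambda a: a, 1)                      # order 1: the mean
var_u = u_statistic(x, lambda a, b: (a - b) ** 2 / 2.0, 2)   # order 2: sample variance
```

The brute-force average over all $\binom{n}{r}$ subsets shown here is only for illustration; it is exponential in $r$, which is why closed forms are used in practice.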
48,846 | What are U-type statistics? | We have established that U-statistics are what the OP is looking for. I will address his second question about order of U-statistics. The theory of U-statistics can be found in many books on nonparametrics and I am sure also in the various statistical encyclopedias. Here is a nice article by Tom Ferguson that summariz... | What are U-type statistics? | We have established that U-statistics are what the OP is looking for. I will address his second question about order of U-statistics. The theory of U-statistics can be found in many books on nonparam | What are U-type statistics?
We have established that U-statistics are what the OP is looking for. I will address his second question about order of U-statistics. The theory of U-statistics can be found in many books on nonparametrics and I am sure also in the various statistical encyclopedias. Here is a nice article b... | What are U-type statistics?
We have established that U-statistics are what the OP is looking for. I will address his second question about order of U-statistics. The theory of U-statistics can be found in many books on nonparam |
48,847 | How to combine several time series into a useful average time series? | I am not particularly familiar with them, but it seems to me that this would be a reasonable case to use a VAR (vector autoregression) model for.
In particular, a VAR with a linear time trend or period-specific deterministic component would give you a nice summary statistic for the overall trend in a given period. See... | How to combine several time series into a useful average time series? | I am not particularly familiar with them, but it seems to me that this would be a reasonable case to use a VAR (vector autoregression) model for.
In particular, a VAR with a linear time trend or perio | How to combine several time series into a useful average time series?
I am not particularly familiar with them, but it seems to me that this would be a reasonable case to use a VAR (vector autoregression) model for.
In particular, a VAR with a linear time trend or period-specific deterministic component would give you ... | How to combine several time series into a useful average time series?
I am not particularly familiar with them, but it seems to me that this would be a reasonable case to use a VAR (vector autoregression) model for.
In particular, a VAR with a linear time trend or perio |
48,848 | Evidence on red-purple-blue graphs | To elaborate on my comment above, my suspicion was partly confirmed by the code posted to create the above cited graph at the knowledge discovery blog. Do you perhaps see many of these examples utilizing ggplot2 graphics? It appears the default for scale_color_gradient is blue to red. It appears to me to be a default i... | Evidence on red-purple-blue graphs | To elaborate on my comment above, my suspicion was partly confirmed by the code posted to create the above cited graph at the knowledge discovery blog. Do you perhaps see many of these examples utiliz | Evidence on red-purple-blue graphs
To elaborate on my comment above, my suspicion was partly confirmed by the code posted to create the above cited graph at the knowledge discovery blog. Do you perhaps see many of these examples utilizing ggplot2 graphics? It appears the default for scale_color_gradient is blue to red.... | Evidence on red-purple-blue graphs
To elaborate on my comment above, my suspicion was partly confirmed by the code posted to create the above cited graph at the knowledge discovery blog. Do you perhaps see many of these examples utiliz |
48,849 | Ordered logit with (too many?) categorical independent variables | Using ordinary least squares (OLS) does not solve the problem you are facing. It only assumes it away. If you are using OLS you are implicitly assuming that the different points on your scale are equally spaced. If you are comfortable with this assumption, push the OLS button and try to convince your audience.
I would... | Ordered logit with (too many?) categorical independent variables | Using ordinary least squares (OLS) does not solve the problem you are facing. It only assumes it away. If you are using OLS you are implicitly assuming that the different points on your scale are equa | Ordered logit with (too many?) categorical independent variables
Using ordinary least squares (OLS) does not solve the problem you are facing. It only assumes it away. If you are using OLS you are implicitly assuming that the different points on your scale are equally spaced. If you are comfortable with this assumption... | Ordered logit with (too many?) categorical independent variables
Using ordinary least squares (OLS) does not solve the problem you are facing. It only assumes it away. If you are using OLS you are implicitly assuming that the different points on your scale are equa |
48,850 | Ordered logit with (too many?) categorical independent variables | After a lot of procrastinating, I ended up solving this issue. I thought I'd post my solution so nobody else makes the same (stupid!) error.
My issue wasn't categorical variables at all. The issue was that in my independent variables, I had household Income, which was not scaled properly. This meant that both polr and ba... | Ordered logit with (too many?) categorical independent variables | After a lot of procrastinating, I ended up solving this issue. I thought I'd post my solution so nobody else makes the same (stupid!) error.
My issue wasn't categorical variables at all. The issue was t | Ordered logit with (too many?) categorical independent variables
After a lot of procrastinating, I ended up solving this issue. I thought I'd post my solution so nobody else makes the same (stupid!) error.
My issue wasn't categorical variables at all. The issue was that in my independent variables, I had household Income... | Ordered logit with (too many?) categorical independent variables
After a lot of procrastinating, I ended up solving this issue. I thought I'd post my solution so nobody else makes the same (stupid!) error.
My issue wasn't categorical variables at all. The issue was t |
48,851 | Refugee from SPSS having issues with fit.contrast in R | My initial remark is: why do you need these functions from the gmodels package to accomplish this? These are basic tasks that can be accomplished straightforwardly in base R, which is what I would recommend.
But to answer your question directly, the issue here is that fit.contrast is using "type 3" tests of effects whi... | Refugee from SPSS having issues with fit.contrast in R | My initial remark is: why do you need these functions from the gmodels package to accomplish this? These are basic tasks that can be accomplished straightforwardly in base R, which is what I would rec | Refugee from SPSS having issues with fit.contrast in R
My initial remark is: why do you need these functions from the gmodels package to accomplish this? These are basic tasks that can be accomplished straightforwardly in base R, which is what I would recommend.
But to answer your question directly, the issue here is t... | Refugee from SPSS having issues with fit.contrast in R
My initial remark is: why do you need these functions from the gmodels package to accomplish this? These are basic tasks that can be accomplished straightforwardly in base R, which is what I would rec |
48,852 | Refugee from SPSS having issues with fit.contrast in R | It is strongly recommended not to use reserved words as R object names. For genotype contrast it is clear that the p-values from ANOVA and contrast statement from gmodels are identical and F=t^2.
library(gmodels)
set.seed(03215)
Genotype <- sample(c("WT","KO"), 1000, replace=TRUE)
Time <- factor(sample(1:3, 1000, replace=TR... | Refugee from SPSS having issues with fit.contrast in R | It is strongly recommended not to use reserved words as R object names. For genotype contrast it is clear that the p-values from ANOVA and contrast statement from gmodels are identical and F=t^2.
library(g | Refugee from SPSS having issues with fit.contrast in R
Strongly recommended to not use reserved words as R objects names. For genotype contrast it is clear that the p-values from ANOVA and contrast statement from gmodels are identical and F=t^2.
library(gmodels)
set.seed(03215)
Genotype <- sample(c("WT","KO"), 1000, re... | Refugee from SPSS having issues with fit.contrast in R
Strongly recommended to not use reserved words as R objects names. For genotype contrast it is clear that the p-values from ANOVA and contrast statement from gmodels are identical and F=t^2.
library(g |
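The F = t^2 claim in this answer is easy to check numerically. Below is a sketch in Python rather than the thread's R, on invented WT/KO data: for a single-df contrast, the pooled two-sample t statistic and the one-way ANOVA F statistic coincide up to squaring.

```python
# Sketch (invented data, not the thread's R example): verifying F = t^2.
import math
import random

random.seed(1)
wt = [random.gauss(10.0, 2.0) for _ in range(30)]
ko = [random.gauss(11.0, 2.0) for _ in range(30)]

def mean(xs):
    return sum(xs) / len(xs)

# Pooled two-sample t statistic for the WT-vs-KO contrast.
n1, n2 = len(wt), len(ko)
m1, m2 = mean(wt), mean(ko)
ss1 = sum((x - m1) ** 2 for x in wt)
ss2 = sum((x - m2) ** 2 for x in ko)
sp2 = (ss1 + ss2) / (n1 + n2 - 2)          # pooled variance
t = (m1 - m2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))

# One-way ANOVA F statistic for the same two groups.
grand = mean(wt + ko)
ss_between = n1 * (m1 - grand) ** 2 + n2 * (m2 - grand) ** 2
ss_within = ss1 + ss2
F = (ss_between / 1) / (ss_within / (n1 + n2 - 2))

# For a two-level factor the statistics agree exactly: F = t^2.
assert abs(t * t - F) < 1e-9
```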
48,853 | How to check whether a sample is representative across two dimensions simultaneously? | You can still do the chi-square test. Nothing says that the bins have to be 1 dimensional. Divide the globe into longitude by latitude segments and count the number of cases in each bin for the two samples. The same chi square test applies. | How to check whether a sample is representative across two dimensions simultaneously? | You can still do the chi-square test. Nothing says that the bins have to be 1 dimensional. Divide the globe into longitude by latitude segments and count the number of cases in each bin for the two | How to check whether a sample is representative across two dimensions simultaneously?
You can still do the chi-square test. Nothing says that the bins have to be 1 dimensional. Divide the globe into longitude by latitude segments and count the number of cases in each bin for the two samples. The same chi square test... | How to check whether a sample is representative across two dimensions simultaneously?
You can still do the chi-square test. Nothing says that the bins have to be 1 dimensional. Divide the globe into longitude by latitude segments and count the number of cases in each bin for the two |
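A sketch of the binned approach, in Python with invented coordinates (the test itself is language-agnostic): grid the region, count each sample's points per cell, and compare the counts with a chi-square homogeneity statistic. Here the p-value comes from permutation rather than the chi-square reference distribution, which sidesteps worries about sparse cells.

```python
# Sketch (invented "map" coordinates): 2-D binned chi-square homogeneity test.
import random

random.seed(2)
# Two point samples on a 1x1 map; sample b is shifted east by 0.2.
a = [(random.random(), random.random()) for _ in range(300)]
b = [(min(random.random() + 0.2, 0.999), random.random()) for _ in range(300)]

def cell(p, k=3):
    # 2-D bin index on a k x k grid, flattened to one categorical cell.
    return int(p[0] * k) * k + int(p[1] * k)

def chi2_homogeneity(xs, ys, k=3):
    counts_x = [0] * (k * k)
    counts_y = [0] * (k * k)
    for p in xs:
        counts_x[cell(p, k)] += 1
    for p in ys:
        counts_y[cell(p, k)] += 1
    nx, ny, n = len(xs), len(ys), len(xs) + len(ys)
    stat = 0.0
    for cx, cy in zip(counts_x, counts_y):
        tot = cx + cy
        if tot == 0:
            continue
        ex, ey = tot * nx / n, tot * ny / n   # expected under homogeneity
        stat += (cx - ex) ** 2 / ex + (cy - ey) ** 2 / ey
    return stat

obs = chi2_homogeneity(a, b)

# Permutation p-value: shuffle the pooled points, resplit, recompute.
pooled = a + b
more_extreme = 0
for _ in range(200):
    random.shuffle(pooled)
    if chi2_homogeneity(pooled[:300], pooled[300:]) >= obs:
        more_extreme += 1
p_value = (more_extreme + 1) / 201
```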
48,854 | How to check whether a sample is representative across two dimensions simultaneously? | Fasano and Franceschini suggested a multi-dimensional version of the Kolmogorov-Smirnov test which they show to be preferable to the $\chi^2$-test for 2- and 3-dimensional data in Monthly Notices of the Royal Astronomical Society 225:155-170. The paper is freely available here. | How to check whether a sample is representative across two dimensions simultaneously? | Fasano and Franceschini suggested a multi-dimensional version of the Kolmogorov-Smirnov test which they show to be preferable to the $\chi^2$-test for 2- and 3-dimensional data in Monthly Notices of th | How to check whether a sample is representative across two dimensions simultaneously?
Fasano and Franceschini suggested a multi-dimensional version of the Kolmogorov-Smirnov test which they show to be preferable to the $\chi^2$-test for 2- and 3-dimensional data in Monthly Notices of the Royal Astronomical Society 225:1... | How to check whether a sample is representative across two dimensions simultaneously?
Fasano and Franceschini suggested a multi-dimensional version of the Kolmogorov-Smirnov test which they show to be preferable to the $\chi^2$-test for 2- and 3-dimensional data in Monthly Notices of th
48,855 | How to check whether a sample is representative across two dimensions simultaneously? | Actually, I had the same question recently. By scanning rapidly through the published literature, I came to realize that a general test has been developed by Friedman & Rafsky. Their approach is to use a minimum spanning tree, which is the smallest tree that connects the points of the cloud in $n$ dimensions, and compu... | How to check whether a sample is representative across two dimensions simultaneously? | Actually, I had the same question recently. By scanning rapidly through the published literature, I came to realize that a general test has been developed by Friedman & Rafsky. Their approach is to us | How to check whether a sample is representative across two dimensions simultaneously?
Actually, I had the same question recently. By scanning rapidly through the published literature, I came to realize that a general test has been developed by Friedman & Rafsky. Their approach is to use a minimum spanning tree, which i... | How to check whether a sample is representative across two dimensions simultaneously?
Actually, I had the same question recently. By scanning rapidly through the published literature, I came to realize that a general test has been developed by Friedman & Rafsky. Their approach is to us |
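A sketch of the Friedman-Rafsky idea on invented 2-D data (Python with a hand-rolled Prim's algorithm; not the authors' code): pool the two samples, build the minimum spanning tree over all points, and count edges joining points from different samples. Unusually few cross-sample edges suggest the samples occupy different regions; a permutation of the labels supplies the null.

```python
# Sketch (toy data): Friedman-Rafsky style MST two-sample test.
import math
import random

random.seed(3)
x = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(40)]
y = [(random.gauss(2, 1), random.gauss(2, 1)) for _ in range(40)]
points = x + y
labels = [0] * 40 + [1] * 40

def mst_edges(pts):
    # Prim's algorithm on the complete Euclidean graph.
    n = len(pts)
    in_tree = [False] * n
    best = [math.inf] * n
    parent = [-1] * n
    best[0] = 0.0
    edges = []
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: best[i])
        in_tree[u] = True
        if parent[u] >= 0:
            edges.append((parent[u], u))
        for v in range(n):
            if not in_tree[v]:
                dist = math.dist(pts[u], pts[v])
                if dist < best[v]:
                    best[v], parent[v] = dist, u
    return edges

edges = mst_edges(points)

def cross_count(labs):
    # Few between-sample MST edges => the samples differ.
    return sum(1 for u, v in edges if labs[u] != labs[v])

obs = cross_count(labels)
perm = labels[:]
leq = 0
for _ in range(500):
    random.shuffle(perm)
    if cross_count(perm) <= obs:   # small counts are the extreme direction
        leq += 1
p_value = (leq + 1) / 501
```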
48,856 | Inference on a probabilistic graphical model with observed continuous variable | Your derivation looks correct and is consistent with how joint distributions that are a mix of discrete and continuous are normally handled.
Note that some of the things inside your integral do not depend on $y$. You can re-write it as
$$ P(d,l,x,f) = P(l) P(f) f_{X|L}(x|l) \int_y P(d|x,y) f_{Y|L,F}(y|l,f) dy $$
This... | Inference on a probabilistic graphical model with observed continuous variable | Your derivation looks correct and is consistent with how joint distributions that are a mix of discrete and continuous are normally handled.
Note that some of the things inside your integral do not d | Inference on a probabilistic graphical model with observed continuous variable
Your derivation looks correct and is consistent with how joint distributions that are a mix of discrete and continuous are normally handled.
Note that some of the things inside your integral do not depend on $y$. You can re-write it as
$$ P... | Inference on a probabilistic graphical model with observed continuous variable
Your derivation looks correct and is consistent with how joint distributions that are a mix of discrete and continuous are normally handled.
Note that some of the things inside your integral do not d |
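The pulled-out integral can be evaluated numerically once concrete distributions are chosen. The sketch below invents a normal density for $f_{Y|L,F}$ and a logistic CPD for $P(d|x,y)$ purely for illustration; the y-free factors then multiply the one-dimensional integral.

```python
# Sketch of the marginalization step: integrate P(d|x,y) f(y) over y
# numerically. All distributions here are invented stand-ins.
import math

def f_y(y, mu=0.0, sigma=1.0):
    # f_{Y|L,F}(y|l,f): a normal density for the continuous parent y.
    return math.exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def p_d_given_xy(x, y):
    # P(d=1|x,y): a logistic link, standing in for the model's CPD.
    return 1.0 / (1.0 + math.exp(-(x + y)))

def integral(x, lo=-8.0, hi=8.0, steps=4000):
    # Midpoint rule for  int P(d|x,y) f(y) dy  over an effectively full range.
    h = (hi - lo) / steps
    total = 0.0
    for i in range(steps):
        y = lo + (i + 0.5) * h
        total += p_d_given_xy(x, y) * f_y(y)
    return total * h

# The y-free factors P(l) P(f) f_{X|L}(x|l) then just scale this number.
p_l, p_f, f_x = 0.5, 0.3, 0.4
marg = integral(1.0)
joint = p_l * p_f * f_x * marg
```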
48,857 | What to do when the standard error equals 0 | How about a permuted or randomized t-test? That would let you avoid computing the variance entirely. They make minimal assumptions about the data and are pretty easy to perform.
If you're unfamiliar with them, the idea is actually pretty simple. Start by calculating the difference between the two groups' means. We'll c... | What to do when the standard error equals 0 | How about a permuted or randomized t-test? That would let you avoid computing the variance entirely. They make minimal assumptions about the data and are pretty easy to perform.
If you're unfamiliar w | What to do when the standard error equals 0
How about a permuted or randomized t-test? That would let you avoid computing the variance entirely. They make minimal assumptions about the data and are pretty easy to perform.
If you're unfamiliar with them, the idea is actually pretty simple. Start by calculating the diffe... | What to do when the standard error equals 0
How about a permuted or randomized t-test? That would let you avoid computing the variance entirely. They make minimal assumptions about the data and are pretty easy to perform.
If you're unfamiliar w |
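The procedure described here is short enough to sketch directly (Python, invented data): compute the observed difference in group means, then repeatedly shuffle the group labels and see how often a difference at least as extreme arises.

```python
# Sketch of a two-sided permutation test on made-up data.
import random

random.seed(5)
group_a = [3.1, 2.8, 3.4, 3.0, 2.9, 3.3, 3.2, 3.5]
group_b = [2.4, 2.6, 2.2, 2.7, 2.5, 2.3, 2.8, 2.1]

def mean(xs):
    return sum(xs) / len(xs)

obs_diff = mean(group_a) - mean(group_b)

pooled = group_a + group_b
n_a = len(group_a)
count = 0
n_perm = 2000
for _ in range(n_perm):
    random.shuffle(pooled)
    diff = mean(pooled[:n_a]) - mean(pooled[n_a:])
    if abs(diff) >= abs(obs_diff):   # as or more extreme, two-sided
        count += 1
p_value = (count + 1) / (n_perm + 1)
```

Note that no variance is ever computed, which is exactly why the approach survives the degenerate SE = 0 situation in the question.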
48,858 | What to do when the standard error equals 0 | Suppose you looked at it in quality control terms. Will you, or will your audience for this research, really be satisfied to learn that no failure has occurred after 10 trials? (I'm guessing that that is an accurate way to describe your situation.) You may have to use a much larger sample for each of your 2 groups i... | What to do when the standard error equals 0 | Suppose you looked at it in quality control terms. Will you, or will your audience for this research, really be satisfied to learn that no failure has occurred after 10 trials? (I'm guessing that th | What to do when the standard error equals 0
Suppose you looked at it in quality control terms. Will you, or will your audience for this research, really be satisfied to learn that no failure has occurred after 10 trials? (I'm guessing that that is an accurate way to describe your situation.) You may have to use a mu... | What to do when the standard error equals 0
Suppose you looked at it in quality control terms. Will you, or will your audience for this research, really be satisfied to learn that no failure has occurred after 10 trials? (I'm guessing that th |
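The quality-control point can be made quantitative with an exact binomial bound (a sketch, not from the thread): zero failures in n trials still leaves a sizeable one-sided upper confidence limit on the failure rate, roughly the familiar "rule of three" 3/n.

```python
# Sketch: exact one-sided upper bound on a rate after observing 0 failures.
import math

def upper_bound_zero_failures(n, alpha=0.05):
    # Largest p not rejected by "0 failures in n": (1-p)^n = alpha,
    # so p_upper = 1 - alpha**(1/n).
    return 1.0 - alpha ** (1.0 / n)

p10 = upper_bound_zero_failures(10)    # ~0.26 after only 10 trials
p100 = upper_bound_zero_failures(100)  # ~0.03 after 100 trials

# The common "rule of three" approximation is 3/n.
approx10 = 3 / 10
```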
48,859 | Using multinomial logistic regression for multiple related outcomes | As @Riaz Rizvi suggests, this may not be a good idea.
Your scheme enforces a particular (and rather unlikely) covariance structure on the problem by flattening to a multinomial this way. Since you suspect, or at least wish to allow the possibility that the presence of A is informative of B, then you should be workin... | Using multinomial logistic regression for multiple related outcomes | As @Riaz Rizvi suggests, this may not be a good idea.
Your scheme enforces a particular (and rather unlikely) covariance structure on the problem by flattening to a multinomial this way. Since you | Using multinomial logistic regression for multiple related outcomes
As @Riaz Rizvi suggests, this may not be a good idea.
Your scheme enforces a particular (and rather unlikely) covariance structure on the problem by flattening to a multinomial this way. Since you suspect, or at least wish to allow the possibility t... | Using multinomial logistic regression for multiple related outcomes
As @Riaz Rizvi suggests, this may not be a good idea.
Your scheme enforces a particular (and rather unlikely) covariance structure on the problem by flattening to a multinomial this way. Since you |
48,860 | Using multinomial logistic regression for multiple related outcomes | A multinomial is perfectly fine in this situation, but it comes at two costs:
An explosion in the number of parameters. (If you were to combine $n$ binary variables like this, you would have $2^n$ parameters instead of the original $n$.)
The solution is harder to interpret if the original variables are actually indepe... | Using multinomial logistic regression for multiple related outcomes | A multinomial is perfectly fine in this situation, but it comes at two costs:
An explosion in the number of parameters. (If you were to combine $n$ binary variables like this, you would have $2^n$ pa | Using multinomial logistic regression for multiple related outcomes
A multinomial is perfectly fine in this situation, but it comes at two costs:
An explosion in the number of parameters. (If you were to combine $n$ binary variables like this, you would have $2^n$ parameters instead of the original $n$.)
The solution ... | Using multinomial logistic regression for multiple related outcomes
A multinomial is perfectly fine in this situation, but it comes at two costs:
An explosion in the number of parameters. (If you were to combine $n$ binary variables like this, you would have $2^n$ pa |
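A small simulation sketch (invented probabilities) of the trade-off described above: coding two dependent binaries A and B as a single 2^2-class outcome captures their dependence, which a product of independent marginals would miss.

```python
# Sketch: encode two related binary outcomes as one 4-class outcome.
import random

random.seed(7)
cells = {}
n = 20000
count_a = count_b = 0
for _ in range(n):
    a = 1 if random.random() < 0.4 else 0
    # B depends strongly on A, so (A, B) are far from independent.
    p_b = 0.8 if a == 1 else 0.2
    b = 1 if random.random() < p_b else 0
    count_a += a
    count_b += b
    klass = 2 * a + b          # one of 2^2 = 4 joint classes
    cells[klass] = cells.get(klass, 0) + 1

p_a, p_b_marg = count_a / n, count_b / n
p_joint_11 = cells.get(3, 0) / n   # P(A=1, B=1) from the 4-class coding
p_indep_11 = p_a * p_b_marg        # what independence would predict
dependence_gap = abs(p_joint_11 - p_indep_11)
```

With n binaries the same coding needs 2^n classes, which is the parameter explosion the answer warns about.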
48,861 | Using multinomial logistic regression for multiple related outcomes | I don't think so. The multinomial distribution is derived from n independent variables, but your situation has two dependent variables. A multinomial regression is not applicable here. | Using multinomial logistic regression for multiple related outcomes | I don't think so. The multinomial distribution is derived from n independent variables, but your situation has two dependent variables. A multinomial regression is not applicable here. | Using multinomial logistic regression for multiple related outcomes
I don't think so. The multinomial distribution is derived from n independent variables, but your situation has two dependent variables. A multinomial regression is not applicable here. | Using multinomial logistic regression for multiple related outcomes
I don't think so. The multinomial distribution is derived from n independent variables, but your situation has two dependent variables. A multinomial regression is not applicable here. |
48,862 | R: How to "control" for another variable in Linear Mixed Effects Regression model? | I don't think the issues here can be addressed in a simple answer posted online. I would add:
the inclusion of age and time is problematic and should be thought through. It is unclear to me what the benefit is of having both variables in the model. It can be done. But not by avoiding the issue by making one of the var... | R: How to "control" for another variable in Linear Mixed Effects Regression model? | I don't think the issues here can be addressed in a simple answer posted online. I would add:
the inclusion of age and time is problematic and should be thought through. It is unclear to me what the | R: How to "control" for another variable in Linear Mixed Effects Regression model?
I don't think the issues here can be addressed in a simple answer posted online. I would add:
the inclusion of age and time is problematic and should be thought through. It is unclear to me what the benefit is of having both variables i... | R: How to "control" for another variable in Linear Mixed Effects Regression model?
I don't think the issues here can be addressed in a simple answer posted online. I would add:
the inclusion of age and time is problematic and should be thought through. It is unclear to me what the |
48,863 | Why LDA (Latent Dirichlet Allocation) works (i.e. why put co-occurring words together)? | Technically LDA Gibbs sampling works because we intentionally set up a Markov chain that converges into the posterior distribution of the model parameters, or word–topic assignments. See http://en.wikipedia.org/wiki/Gibbs_sampling#Mathematical_background.
But I guess you are seeking an intuitive answer on why the samp... | Why LDA (Latent Dirichlet Allocation) works (i.e. why put co-occurring words together)? | Technically LDA Gibbs sampling works because we intentionally set up a Markov chain that converges into the posterior distribution of the model parameters, or word–topic assignments. See http://en.wik | Why LDA (Latent Dirichlet Allocation) works (i.e. why put co-occurring words together)?
Technically LDA Gibbs sampling works because we intentionally set up a Markov chain that converges into the posterior distribution of the model parameters, or word–topic assignments. See http://en.wikipedia.org/wiki/Gibbs_sampling#M... | Why LDA (Latent Dirichlet Allocation) works (i.e. why put co-occurring words together)?
Technically LDA Gibbs sampling works because we intentionally set up a Markov chain that converges into the posterior distribution of the model parameters, or word–topic assignments. See http://en.wik |
48,864 | Why LDA (Latent Dirichlet Allocation) works (i.e. why put co-occurring words together)? | I am not familiar with this and can only give a partial answer but maybe it's better than nothing since I am a statistician and understand statistical terminology. Clusters of co-occurring words means words that appear frequently in sequence such as "of the" being a common pair in English. Languages tend to have patterns... | Why LDA (Latent Dirichlet Allocation) works (i.e. why put co-occurring words together)? | I am not familiar with this and can only give a partial answer but maybe it's better than nothing since I am a statistician and understand statistical terminology. Clusters of co-occurring words means wo | Why LDA (Latent Dirichlet Allocation) works (i.e. why put co-occurring words together)?
I am not familiar with this and can only give a partial answer but maybe it's better than nothing since I am a statistician and understand statistical terminology. Clusters of co-occurring words means words that appear frequently in se... | Why LDA (Latent Dirichlet Allocation) works (i.e. why put co-occurring words together)?
I am not familiar with this and can only give a partial answer but maybe it's better than nothing since I am a statistician and understand statistical terminology. Clusters of co-occurring words means wo
48,865 | Why LDA (Latent Dirichlet Allocation) works (i.e. why put co-occurring words together)? | I understand your question as asking about LDA, rather than about the mechanisms of Gibbs sampling. Thus, my answer to the question of why the Gibbs sampling algorithm works is that it is designed to do LDA: our goal is to fit the best possible LDA model given the data and our initial parameter settings.
How many topi... | Why LDA (Latent Dirichlet Allocation) works (i.e. why put co-occurring words together)? | I understand your question as asking about LDA, rather than about the mechanisms of Gibbs sampling. Thus, my answer to the question of why the Gibbs sampling algorithm works is that it is designed to | Why LDA (Latent Dirichlet Allocation) works (i.e. why put co-occurring words together)?
I understand your question as asking about LDA, rather than about the mechanisms of Gibbs sampling. Thus, my answer to the question of why the Gibbs sampling algorithm works is that it is designed to do LDA: our goal is to fit the b... | Why LDA (Latent Dirichlet Allocation) works (i.e. why put co-occurring words together)?
I understand your question as asking about LDA, rather than about the mechanisms of Gibbs sampling. Thus, my answer to the question of why the Gibbs sampling algorithm works is that it is designed to |
48,866 | Why LDA (Latent Dirichlet Allocation) works (i.e. why put co-occurring words together)? | Seems you are asking for intuition.
In a mixture, this is enough to find clusters of co-occurring words.
This means that the distribution over vocabulary i.e. topics will sum to one. So it is sensible that the co-occurring words come in the same topic and fewer words in the same topic increases the probability of words occuri... | Why LDA (Latent Dirichlet Allocation) works (i.e. why put co-occurring words together)? | Seems you are asking for intuition.
In a mixture, this is enough to find clusters of co-occurring words.
This means that the distribution over vocabulary i.e. topics will sum to one. So it is sensib | Why LDA (Latent Dirichlet Allocation) works (i.e. why put co-occurring words together)?
Seems you are asking for intuition.
In a mixture, this is enough to find clusters of co-occurring words.
This means that the distribution over vocabulary i.e. topics will sum to one. So it is sensible that the co-occurring words co... | Why LDA (Latent Dirichlet Allocation) works (i.e. why put co-occurring words together)?
Seems you are asking for intuition.
In a mixture, this is enough to find clusters of co-occurring words.
This means that the distribution over vocabulary i.e. topics will sum to one. So it is sensib |
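For intuition about the mechanism discussed in these answers, here is a toy collapsed Gibbs sampler for LDA on an invented six-word corpus (hyperparameters arbitrary). It is a sketch of the standard full conditional, not production code; on data like this, tokens that share documents tend to drift into the same topic.

```python
# Toy collapsed Gibbs sampler for LDA (invented corpus, 2 topics, 6 words).
import random

random.seed(8)
V, K, ALPHA, BETA = 6, 2, 0.5, 0.5
# Words 0-2 co-occur in the first docs, words 3-5 in the last docs.
docs = [[0, 1, 2, 0, 1, 2], [0, 2, 1, 1, 0, 2],
        [3, 4, 5, 3, 4, 5], [5, 3, 4, 4, 5, 3]]

# Count tables: doc-topic, topic-word, topic totals; random initialization.
ndk = [[0] * K for _ in docs]
nkw = [[0] * V for _ in range(K)]
nk = [0] * K
z = []  # topic assignment per token
for d, doc in enumerate(docs):
    zs = []
    for w in doc:
        t = random.randrange(K)
        zs.append(t)
        ndk[d][t] += 1; nkw[t][w] += 1; nk[t] += 1
    z.append(zs)

for _ in range(200):                      # Gibbs sweeps
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            t = z[d][i]                   # remove token's current assignment
            ndk[d][t] -= 1; nkw[t][w] -= 1; nk[t] -= 1
            # Full conditional: (n_dk + alpha) * (n_kw + beta) / (n_k + V*beta)
            weights = [(ndk[d][k] + ALPHA) * (nkw[k][w] + BETA) / (nk[k] + V * BETA)
                       for k in range(K)]
            t = random.choices(range(K), weights=weights)[0]
            z[d][i] = t
            ndk[d][t] += 1; nkw[t][w] += 1; nk[t] += 1

# Estimated topic-word distributions; each row sums to one.
phi = [[(nkw[k][w] + BETA) / (nk[k] + V * BETA) for w in range(V)]
       for k in range(K)]
```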
48,867 | Implementing an ordered probit model in pymc [closed] | This took quite a bit of work, but I got it in the end. Note that I used the development version (pymc 2.2grad) from github, not the older version available on pypi.
Also, this runs pretty slowly, since it doesn't make good use of numpy's array manipulation. For data sets of reasonable size, some smart preprocessing ... | Implementing an ordered probit model in pymc [closed] | This took quite a bit of work, but I got it in the end. Note that I used the development version (pymc 2.2grad) from github, not the older version available on pypi.
Also, this runs pretty slowly, si | Implementing an ordered probit model in pymc [closed]
This took quite a bit of work, but I got it in the end. Note that I used the development version (pymc 2.2grad) from github, not the older version available on pypi.
Also, this runs pretty slowly, since it doesn't make good use of numpy's array manipulation. For d... | Implementing an ordered probit model in pymc [closed]
This took quite a bit of work, but I got it in the end. Note that I used the development version (pymc 2.2grad) from github, not the older version available on pypi.
Also, this runs pretty slowly, si |
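Independent of pymc, the likelihood being implemented can be sketched in a few lines: an ordered probit places cutpoints on a latent standard-normal scale, and each category's probability is a difference of normal CDF values. The cutpoints and responses below are invented.

```python
# Sketch of the ordered-probit likelihood (not the thread's pymc model).
import math

def phi_cdf(x):
    # Standard normal CDF via erf; handles +/- infinity.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def category_probs(mu, cutpoints):
    # P(Y = k) = Phi(c_k - mu) - Phi(c_{k-1} - mu), with c_0=-inf, c_K=+inf.
    cs = [-math.inf] + list(cutpoints) + [math.inf]
    return [phi_cdf(cs[k + 1] - mu) - phi_cdf(cs[k] - mu)
            for k in range(len(cs) - 1)]

cutpoints = [-1.0, 0.3, 1.2]        # 3 cutpoints -> 4 ordered categories
probs = category_probs(0.5, cutpoints)

# Log-likelihood of some observed ordinal responses (categories 0..3).
data = [0, 1, 1, 2, 2, 2, 3, 1]
loglik = sum(math.log(probs[yv]) for yv in data)
```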
48,868 | Communicating Regression Model Results | In addition to Michelle's answer, predictive ability (as measured by $R^2$) is not relevant to all uses of regression.
In your example, if one is interested in the difference in mean house prices, comparing houses whose NOx level differs by one unit but that have identical values of all the other covariates, then (gi... | Communicating Regression Model Results | In addition to Michelle's answer, predictive ability (as measured by $R^2$) is not relevant to all uses of regression.
In your example, if one is interested in the difference in mean house prices, c | Communicating Regression Model Results
In addition to Michelle's answer, predictive ability (as measured by $R^2$) is not relevant to all uses of regression.
In your example, if one is interested in the difference in mean house prices, comparing houses whose NOx level differs by one unit but that have identical value... | Communicating Regression Model Results
In addition to Michelle's answer, predictive ability (as measured by $R^2$) is not relevant to all uses of regression.
In your example, if one is interested in the difference in mean house prices, c
48,869 | Communicating Regression Model Results | I disagree that another overall method evaluation is needed; what perhaps would be more useful is if people reported model summaries using the statistics already available to them, for example:
Mallows' Cp
adjusted R^2
AIC
BIC
I see no need for another statistic. The misinterpretation of p-values is, I think, a diffe... | Communicating Regression Model Results | I disagree that another overall method evaluation is needed, what perhaps would be more useful is if people reported model summaries using the statistics already available to them, for example:
Mallo | Communicating Regression Model Results
I disagree that another overall method evaluation is needed, what perhaps would be more useful is if people reported model summaries using the statistics already available to them, for example:
Mallows' Cp
adjusted R^2
AIC
BIC
I see no need for another statistic. The misinterpre... | Communicating Regression Model Results
I disagree that another overall method evaluation is needed, what perhaps would be more useful is if people reported model summaries using the statistics already available to them, for example:
Mallo |
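For a concrete sense of these summaries, here is a sketch computing adjusted R^2, AIC and BIC for a simple least-squares fit on invented data (Gaussian-likelihood forms of AIC/BIC, counting the error variance as a parameter).

```python
# Sketch: model summaries for a one-predictor OLS fit on made-up data.
import math
import random

random.seed(10)
n = 60
x = [random.uniform(0, 10) for _ in range(n)]
y = [2.0 + 1.5 * xi + random.gauss(0, 1.0) for xi in x]

# Closed-form simple linear regression.
mx, my = sum(x) / n, sum(y) / n
beta = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
        / sum((xi - mx) ** 2 for xi in x))
alpha = my - beta * mx

rss = sum((yi - (alpha + beta * xi)) ** 2 for xi, yi in zip(x, y))
tss = sum((yi - my) ** 2 for yi in y)
k = 2                                  # intercept + slope
r2 = 1 - rss / tss
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - k)
# Gaussian log-likelihood based criteria; k+1 counts the error variance.
aic = n * math.log(rss / n) + 2 * (k + 1)
bic = n * math.log(rss / n) + math.log(n) * (k + 1)
```

Adjusted R^2 penalizes extra parameters, and BIC penalizes them more heavily than AIC once n is moderate, which is why these are better model summaries than raw R^2.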
48,870 | Communicating Regression Model Results | The consensus thus far seems to be against, and I have to say that I agree. Every so often, I come across the argument 'people make so many mistakes with stats, all these problems would go away if we just switched to _________'. I have never yet found that convincing, and I'm afraid I don't here either. I recognize th... | Communicating Regression Model Results | The consensus thus far seems to be against, and I have to say that I agree. Every so often, I come across the argument 'people make so many mistakes with stats, all these problems would go away if we | Communicating Regression Model Results
The consensus thus far seems to be against, and I have to say that I agree. Every so often, I come across the argument 'people make so many mistakes with stats, all these problems would go away if we just switched to _________'. I have never yet found that convincing, and I'm af... | Communicating Regression Model Results
The consensus thus far seems to be against, and I have to say that I agree. Every so often, I come across the argument 'people make so many mistakes with stats, all these problems would go away if we |
48,871 | Communicating Regression Model Results | What you described is pretty much already captured within Stepwise Regression methodology with Hold Out periods. At least as presented by a software such as XLStat, the latter selects the best variables and discloses what the R Square is for the best one variable model, best 2 variable model, etc... It keeps on addin... | Communicating Regression Model Results | What you described is pretty much already captured within Stepwise Regression methodology with Hold Out periods. At least as presented by a software such as XLStat, the latter selects the best variab | Communicating Regression Model Results
What you described is pretty much already captured within Stepwise Regression methodology with Hold Out periods. At least as presented by a software such as XLStat, the latter selects the best variables and discloses what the R Square is for the best one variable model, best 2 va... | Communicating Regression Model Results
What you described is pretty much already captured within Stepwise Regression methodology with Hold Out periods. At least as presented by a software such as XLStat, the latter selects the best variab |
48,872 | Are there problems with inference using linear regression on observational data with highly skewed distributions of predictor values? | Yes, there shouldn't be any problem given your description in comments of skewed predictor as actually meaning a 0/1 dummy variable that just has many more values of 1 than 0. There's no reason why this should be a problem; at most it will mean you just have a relatively high degree of uncertainty about your estimates... | Are there problems with inference using linear regression on observational data with highly skewed d | Yes, there shouldn't be any problem given your description in comments of skewed predictor as actually meaning a 0/1 dummy variable that just has many more values of 1 than 0. There's no reason why t | Are there problems with inference using linear regression on observational data with highly skewed distributions of predictor values?
Yes, there shouldn't be any problem given your description in comments of skewed predictor as actually meaning a 0/1 dummy variable that just has many more values of 1 than 0. There's n... | Are there problems with inference using linear regression on observational data with highly skewed d
Yes, there shouldn't be any problem given your description in comments of skewed predictor as actually meaning a 0/1 dummy variable that just has many more values of 1 than 0. There's no reason why t |
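The "relatively high degree of uncertainty" is easy to quantify for a 0/1 dummy: its coefficient is a difference in group means, so the standard error depends on the two group sizes. A sketch assuming an error SD of 1:

```python
# Sketch: SE of a dummy-variable coefficient under balanced vs skewed splits.
import math

sigma = 1.0

def se_dummy_coef(n0, n1, sd):
    # SE of the difference in group means = sd * sqrt(1/n0 + 1/n1).
    return sd * math.sqrt(1.0 / n0 + 1.0 / n1)

se_balanced = se_dummy_coef(500, 500, sigma)   # 50/50 split of n = 1000
se_skewed = se_dummy_coef(50, 950, sigma)      # 5% zeros, 95% ones
```

The skewed split roughly doubles the standard error here, but nothing about the estimate is biased or invalid, which is the answer's point.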
48,873 | Estimating State Space Model in R with MARSS package and shared parameters between Q and R [closed] | You cannot share parameters between Q and R, as you have specified in the model.
See http://journal.r-project.org/archive/2012-1/RJournal_2012-1_Holmes~et~al.pdf pg 13 "Elements with the same character name are constrained to be equal (no sharing across parameter matrices, only within)."
I don't know if this helps muc... | Estimating State Space Model in R with MARSS package and shared parameters between Q and R [closed] | You cannot share parameters between Q and R, as you have specified in the model.
See http://journal.r-project.org/archive/2012-1/RJournal_2012-1_Holmes~et~al.pdf pg 13 "Elements with the same charact | Estimating State Space Model in R with MARSS package and shared parameters between Q and R [closed]
You cannot share parameters between Q and R, as you have specified in the model.
See http://journal.r-project.org/archive/2012-1/RJournal_2012-1_Holmes~et~al.pdf pg 13 "Elements with the same character name are constrai... | Estimating State Space Model in R with MARSS package and shared parameters between Q and R [closed]
You cannot share parameters between Q and R, as you have specified in the model.
See http://journal.r-project.org/archive/2012-1/RJournal_2012-1_Holmes~et~al.pdf pg 13 "Elements with the same charact |
48,874 | Estimating State Space Model in R with MARSS package and shared parameters between Q and R [closed] | I think that there is a way around this. Write a function with MARSS inside it where sigma_epsilon is given (and not estimated), which in turn determines sigma_nu using the restriction. (To get an initial value for this run with no restrictions on the variance terms to start with; alternatively use the filter package, ... | Estimating State Space Model in R with MARSS package and shared parameters between Q and R [closed] | I think that there is a way around this. Write a function with MARSS inside it where sigma_epsilon is given (and not estimated), which in turn determines sigma_nu using the restriction. (To get an ini | Estimating State Space Model in R with MARSS package and shared parameters between Q and R [closed]
I think that there is a way around this. Write a function with MARSS inside it where sigma_epsilon is given (and not estimated), which in turn determines sigma_nu using the restriction. (To get an initial value for this ... | Estimating State Space Model in R with MARSS package and shared parameters between Q and R [closed]
I think that there is a way around this. Write a function with MARSS inside it where sigma_epsilon is given (and not estimated), which in turn determines sigma_nu using the restriction. (To get an ini |
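The workaround's general shape (fix one parameter, optimize the rest, then grid-search the fixed one) is a profile likelihood. A deliberately tiny sketch on a plain normal model, not MARSS; here the inner optimum is closed-form, which keeps the idea visible.

```python
# Sketch: profile likelihood by grid search on a toy normal model.
import math

data = [1.2, 0.7, 1.9, 1.1, 0.8, 1.5, 1.3, 0.9, 1.6, 1.0]
n = len(data)

def profile_loglik(sigma):
    # With sigma fixed, the inner optimum for mu is just the sample mean.
    mu = sum(data) / n
    return sum(-0.5 * math.log(2 * math.pi * sigma ** 2)
               - (x - mu) ** 2 / (2 * sigma ** 2) for x in data)

grid = [0.05 * i for i in range(1, 41)]          # sigma in (0, 2]
best_sigma = max(grid, key=profile_loglik)

# Closed-form MLE of sigma for comparison.
mu = sum(data) / n
sigma_mle = math.sqrt(sum((x - mu) ** 2 for x in data) / n)
```

In the MARSS setting the inner step would instead be a constrained MARSS fit with sigma_epsilon held fixed, but the outer grid logic is the same.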
48,875 | How can factor-levels be automatically chosen in R to maximize the number of positive coefficients in a regression model? | Thanks to whuber's comment, and Seb's answer, I have put together the following function which I believe does what I want. Hopefully it will be useful to someone. Comments are welcome.
# take a dataframe, and re-level it such that the levels of the factors are
# assigned positive coefficients by lm()
# NOTE: this cur... | How can factor-levels be automatically chosen in R to maximize the number of positive coefficients i | Thanks to whuber's comment, and Seb's answer, I have put together the following function which I believe does what I want. Hopefully it will be useful to someone. Comments are welcome.
# take a data | How can factor-levels be automatically chosen in R to maximize the number of positive coefficients in a regression model?
Thanks to whuber's comment, and Seb's answer, I have put together the following function which I believe does what I want. Hopefully it will be useful to someone. Comments are welcome.
# take a da... | How can factor-levels be automatically chosen in R to maximize the number of positive coefficients i
Thanks to whuber's comment, and Seb's answer, I have put together the following function which I believe does what I want. Hopefully it will be useful to someone. Comments are welcome.
# take a data |
48,876 | How can factor-levels be automatically chosen in R to maximize the number of positive coefficients in a regression model? | Here is an attempt at doing what you wanted.
# Setting up some sample data
require(dummies)
df <- data.frame(categorial=rep(c(1,2,3), each=20), x=rnorm(60))
flevels <- dummy(df$categorial)
df$categorial <- factor(df$categorial)
df$y=20 + df$x*3 + flevels%*%c(3,1,2) + rnorm(60)*2
I use a regression in order to o... | How can factor-levels be automatically chosen in R to maximize the number of positive coefficients i | Here is an attampt of doing what you wanted.
# Setting up some sample data
require(dummies)
df <- data.frame(categorial=rep(c(1,2,3), each=20), x=rnorm(60))
flevels <- dummy(df$categorial)
df$cat | How can factor-levels be automatically chosen in R to maximize the number of positive coefficients in a regression model?
Here is an attampt of doing what you wanted.
# Setting up some sample data
require(dummies)
df <- data.frame(categorial=rep(c(1,2,3), each=20), x=rnorm(60))
flevels <- dummy(df$categorial)
df$c... | How can factor-levels be automatically chosen in R to maximize the number of positive coefficients i
Here is an attampt of doing what you wanted.
# Setting up some sample data
require(dummies)
df <- data.frame(categorial=rep(c(1,2,3), each=20), x=rnorm(60))
flevels <- dummy(df$categorial)
df$cat |
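The releveling logic can be sketched without R: for a single factor, choosing the level with the smallest group mean as the reference makes every dummy coefficient (level mean minus reference mean) non-negative. Invented data below; with additional covariates the choice would have to use fitted effects rather than raw means.

```python
# Sketch: pick the reference level so all dummy coefficients are >= 0.
import random

random.seed(13)
levels = ["a", "b", "c"]
effects = {"a": 3.0, "b": 1.0, "c": 2.0}   # invented true level effects
rows = [(lv, 20.0 + effects[lv] + random.gauss(0, 2.0))
        for lv in levels for _ in range(20)]

# Group means per level.
means = {lv: sum(yv for l, yv in rows if l == lv) / 20 for lv in levels}

# Reference = level with the smallest mean; dummies measured against it.
ref = min(means, key=means.get)
coefs = {lv: means[lv] - means[ref] for lv in levels if lv != ref}
```

By construction every entry of `coefs` is non-negative, which is exactly what relevel-to-the-minimum achieves for a one-factor lm() fit.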
48,877 | Permutational MANOVA and Mahalanobis distances in R
As one of the developers of vegan (though not the adonis() function) I am reasonably well-placed to comment; unfortunately, adonis() assumes vegdist() is to be used for computation of the dissimilarity matrix in the function. Changing adonis() wouldn't be too difficult to do so that it allows any function that returns ...
48,878 | Test for non random-walk
You could try the Runs Test. Let $n_1$ be the number of +1 runs and $n_2$ be the number of -1 runs. You could use the test as in the wiki page.
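The answer names the runs test but gives no code. Below is a minimal sketch of the Wald-Wolfowitz version in Python (the thread itself is R-flavoured and names no tooling; all function names here are mine). The z-statistic compares the observed number of runs against its expectation under randomness.

```python
import math

def runs_test(seq):
    """Wald-Wolfowitz runs test for a sequence of +1/-1 moves.
    Returns (observed runs, expected runs, z statistic)."""
    n1 = sum(1 for s in seq if s > 0)   # count of +1 steps
    n2 = sum(1 for s in seq if s <= 0)  # count of -1 steps
    n = n1 + n2
    # A run ends wherever the sign changes between neighbours.
    runs = 1 + sum(1 for a, b in zip(seq, seq[1:]) if (a > 0) != (b > 0))
    mean_r = 1 + 2 * n1 * n2 / n
    var_r = 2 * n1 * n2 * (2 * n1 * n2 - n) / (n ** 2 * (n - 1))
    return runs, mean_r, (runs - mean_r) / math.sqrt(var_r)

# A perfectly alternating sequence has far too many runs to be random:
r, mu, z = runs_test([1, -1] * 10)
```

A large |z| (here about 4.1) rejects randomness; for a random walk's signed increments z should be small.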
48,879 | McNemar’s test or T-test for measuring statistical significance of matched-pre-post-test result
Yes, for research question 1 the t-test is appropriate, provided that the difference post-score minus pre-score is about normally distributed; if not, consider using the Wilcoxon signed rank test.
Yes, for research question 2 McNemar is the right choice.
No. The t-test is unusual to apply for proportions. When they do it for all t...
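For the McNemar recommendation above, the statistic depends only on the discordant cells of the paired 2x2 table. A small sketch (my own naming; the answer gives no code):

```python
def mcnemar_statistic(b, c, continuity=False):
    """McNemar chi-square (1 df) from the discordant pair counts:
    b = subjects who changed 0 -> 1, c = subjects who changed 1 -> 0."""
    num = (abs(b - c) - 1) ** 2 if continuity else (b - c) ** 2
    return num / (b + c)

stat = mcnemar_statistic(15, 5)            # plain version
stat_cc = mcnemar_statistic(15, 5, True)   # continuity-corrected
```

Compare the statistic against the chi-square(1) critical value (3.84 at the 5% level); with b + c small, an exact binomial test on (b, c) is usually preferred.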
48,880 | What are subjective interestingness measures?
Consider a classic example of the following rule:
IF (patient is pregnant) THEN (patient is female).
This rule is very accurate and comprehensible, but it is not interesting, since it represents the obvious.
Another example, from a real-world data set:
IF (used_seat_belt = ‘yes’) THEN (injury = ‘no’)...
48,881 | Determining how well given real-life data fits to a given probability distribution
There are statistical tests that allow you to check if your data are an inappropriately poor match to a given distribution as @ChillPenguin has noted. However, I think graphical techniques are best for this task.
Typically, the best approach is to use a qq-plot. A somewhat less-used, but similar approach is to use ...
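The qq-plot recommended above is just sorted data paired with theoretical quantiles at plotting positions. A sketch using only Python's standard library (the answer names no tooling; in R one would use qqnorm()); the returned correlation of the two coordinates is a crude straightness measure:

```python
import math
from statistics import NormalDist, fmean, stdev

def normal_qq(data):
    """(theoretical, observed) pairs for a normal QQ-plot, plus the
    correlation of the two coordinates."""
    xs = sorted(data)
    n = len(xs)
    nd = NormalDist(fmean(xs), stdev(xs))       # fitted normal
    theo = [nd.inv_cdf((i + 0.5) / n) for i in range(n)]  # plotting positions
    mt, mo = fmean(theo), fmean(xs)
    num = sum((t - mt) * (x - mo) for t, x in zip(theo, xs))
    den = math.sqrt(sum((t - mt) ** 2 for t in theo) *
                    sum((x - mo) ** 2 for x in xs))
    return list(zip(theo, xs)), num / den

points, r = normal_qq(list(range(1, 21)))
```

Points hugging the diagonal (r near 1) indicate a good fit; systematic curvature indicates skew or heavy tails.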
48,882 | Determining how well given real-life data fits to a given probability distribution
Given a model (that is, a parametric family of distributions, such as the family of normal distributions parametrized by mean and variance), the most straightforward thing to do is use maximum likelihood estimation to estimate the parameters, then use the probability density function to assess how typical the data are. If...
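The MLE-then-density recipe above is easy to make concrete for the normal family, where the MLEs are the sample mean and the 1/n variance. A minimal Python sketch (my own naming):

```python
import math

def normal_mle(data):
    """Maximum likelihood estimates for a normal model:
    sample mean and the biased (1/n) variance."""
    n = len(data)
    mu = sum(data) / n
    var = sum((x - mu) ** 2 for x in data) / n
    return mu, var

def mean_loglik(data, mu, var):
    """Average log density of the data under N(mu, var) --
    a per-observation measure of how typical the data are."""
    return sum(-0.5 * math.log(2 * math.pi * var) - (x - mu) ** 2 / (2 * var)
               for x in data) / len(data)

mu, var = normal_mle([1, 2, 3, 4, 5])
ll = mean_loglik([1, 2, 3, 4, 5], mu, var)
```

The average log-likelihood can then be compared across candidate families fitted to the same data.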
48,883 | Interpreting coefficients from a VECM (Vector Error Correction Model)
After much researching, the following reference was the most useful to me when trying to interpret the findings of a VECM:
Helmut Lütkepohl, Markus Krätzig. Structural Vector Autoregressive Modeling and Impulse Responses, pp. 159-196. In: Applied time-series economics.
A link to the chapter is given below:
http://ebooks...
48,884 | Interpreting coefficients from a VECM (Vector Error Correction Model) | ECT is consider good if the range between 0 ~ 1 but not more than 2. ECT should be in negative number and if positive value means explosive and not reasonable. For example, if the ECT(-1) estimated coefficient is -0.87 (The estimated coefficient indicates that about 87 per cent of this disequilibrium is corrected betwe... | Interpreting coefficients from a VECM (Vector Error Correction Model) | ECT is consider good if the range between 0 ~ 1 but not more than 2. ECT should be in negative number and if positive value means explosive and not reasonable. For example, if the ECT(-1) estimated co | Interpreting coefficients from a VECM (Vector Error Correction Model)
ECT is consider good if the range between 0 ~ 1 but not more than 2. ECT should be in negative number and if positive value means explosive and not reasonable. For example, if the ECT(-1) estimated coefficient is -0.87 (The estimated coefficient indi... | Interpreting coefficients from a VECM (Vector Error Correction Model)
ECT is consider good if the range between 0 ~ 1 but not more than 2. ECT should be in negative number and if positive value means explosive and not reasonable. For example, if the ECT(-1) estimated co |
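The "87 per cent corrected per period" reading of an ECT coefficient of -0.87 can be checked with a two-line recursion (illustrative numbers only, taken from the answer's example):

```python
alpha = -0.87   # estimated ECT(-1) coefficient
gap = 1.0       # initial disequilibrium, normalized to 1
remaining = [gap]
for _ in range(3):
    gap += alpha * gap   # each period removes 87% of the remaining gap
    remaining.append(gap)
# remaining gap after each period: 1, 0.13, 0.0169, 0.002197
```

So a coefficient near -1 means very fast adjustment back to the long-run equilibrium, while a value near 0 means slow adjustment.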
48,885 | What descriptive statistics should be reported in tables and graphs when using Friedman's nonparametric test?
You can almost never go wrong with more information. With that in mind, reporting the median, interquartile range and range is a good idea. Reporting the bootstrapped 95% CI of the median is also a good idea. (See Haukoos JS, Lewis RJ. Advanced Statistics: Bootstrapping Confidence Intervals for Statistics with ‘‘...
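The bootstrapped 95% CI of the median recommended above is straightforward to compute with the percentile method; a stdlib Python sketch (the cited paper does not prescribe an implementation, and the names here are mine):

```python
import random
import statistics

def bootstrap_median_ci(data, n_boot=2000, alpha=0.05, seed=1):
    """Percentile bootstrap confidence interval for the median:
    resample with replacement, take the median of each resample,
    and read off the alpha/2 and 1 - alpha/2 quantiles."""
    rng = random.Random(seed)
    meds = sorted(statistics.median(rng.choices(data, k=len(data)))
                  for _ in range(n_boot))
    lo = meds[int(alpha / 2 * n_boot)]
    hi = meds[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

lo, hi = bootstrap_median_ci(list(range(1, 21)))
```

More refined variants (BCa intervals) correct for bias and skew, but the percentile interval is usually adequate for reporting.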
48,886 | What descriptive statistics should be reported in tables and graphs when using Friedman's nonparametric test?
This largely overlaps with what @propofol has said.
LAERD statistics has a tutorial on reporting Friedman's test that emphasises reporting the median and interquartile range.
If your data contains many tied ranks, then an interpolated median is typically more sensitive than a standard median. See this discussion, and...
48,887 | Software for learning statistical quality control
The qcc package comes to mind. A quick search through the packages list at http://cran.r-project.org/ shows other packages that may be helpful: graphicsQC, IQCC, qualityTools, SixSigma, and two Rcmdr plugins.
48,888 | Literature on generating "similar" synthetic time series from observed time series
There are several papers under the label of "surrogate data" in the nonlinear data-analysis literature which deal with the question of how to generate data that have "similar" properties to some reference data. These data are then used to run tests to see whether there is additional (nonlinear/chaotic) structure in t...
48,889 | Literature on generating "similar" synthetic time series from observed time series
Maybe you can take a Fourier transform or a wavelet transform, and then flip the signs of the randomly selected components (or shift phases in Fourier space), and then re-assemble the series back. Of course there's also a certain amount of literature on how to bootstrap time series (block bootstrap, mostly), which may ...
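The sign-flipping idea above can be sketched end to end in Python. This uses a naive O(n^2) DFT to stay dependency-free (in practice you would use an FFT library), and flips components k and n-k together so the inverse transform stays real; the mean and the power spectrum (hence the autocovariance) are preserved. All names are mine:

```python
import cmath
import random

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)).real / n
            for t in range(n)]

def surrogate(x, seed=0):
    """Flip the signs of randomly selected Fourier components.
    Components k and n-k are flipped together to keep conjugate
    symmetry, so the reassembled series is real."""
    rng = random.Random(seed)
    X = dft(x)
    n = len(x)
    for k in range(1, (n + 1) // 2):
        if rng.random() < 0.5:
            X[k], X[n - k] = -X[k], -X[n - k]
    return idft(X)

x = [cmath.sin(0.7 * t).real + 0.1 * t for t in range(16)]
s = surrogate(x, seed=3)
```

Because only signs (phases) change, sum and total energy match the original, which is exactly the "similar properties" the surrogate-data literature exploits.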
48,890 | Literature on generating "similar" synthetic time series from observed time series
Not sure about the Fourier transform approach, never heard of that. On the other hand, if you can make some distribution assumption (e.g. changes are multivariate normal) it is easy to simulate from a multivariate normal distribution by running a Cholesky decomposition on the sample covariance matrix of your data set. ...
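In the bivariate case the Cholesky factor mentioned above is available in closed form, [[1, 0], [rho, sqrt(1 - rho^2)]] for a standardized covariance matrix, so the simulation fits in a few lines. A Python sketch (in R one would use chol() or MASS::mvrnorm; the names here are mine):

```python
import math
import random

def bivariate_normal(n, rho, seed=42):
    """Draws from a standard bivariate normal with correlation rho,
    built from independent normals via the 2x2 Cholesky factor."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
        pairs.append((z1, rho * z1 + math.sqrt(1 - rho * rho) * z2))
    return pairs

# Empirical check that the induced correlation is near the target:
pairs = bivariate_normal(20000, 0.8)
xs = [p[0] for p in pairs]; ys = [p[1] for p in pairs]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
sxy = sum((x - mx) * (y - my) for x, y in pairs)
sxx = sum((x - mx) ** 2 for x in xs); syy = sum((y - my) ** 2 for y in ys)
rho_hat = sxy / math.sqrt(sxx * syy)
```

For more than two dimensions the same idea applies with a full Cholesky factor of the sample covariance matrix.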
48,891 | How to choose the tolerance parameter for ABC?
One approach to choosing the cutoff value $\epsilon$ for ABC rejection sampling is the following (similar to Aniko's answer). Simulate several test data sets from known parameter values which are vaguely similar to your observed data (e.g. by performing ABC with a relatively large $\epsilon$). From the ABC output for...
48,892 | How to choose the tolerance parameter for ABC?
Based on your edit, it appears that you are looking for guidance in selecting the tolerance parameter $\epsilon$ for ABC sampling. I don't know much about the topic, but $\epsilon$ should be small. A simple possibility is to select several different values and see whether the resulting posterior distributions look simi...
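The "try several values of epsilon" suggestion is easy to experiment with on a toy model. Below is a hedged Python sketch of ABC rejection for the mean of a N(theta, 1) model with a Uniform(-5, 5) prior and the sample mean as summary statistic; the model, prior, and all names are illustrative, not from the thread:

```python
import random
import statistics

def abc_rejection(obs, eps, n_sims=20000, seed=1):
    """Toy ABC rejection sampler: draw theta from the prior, simulate a
    data set, keep theta if the simulated summary (sample mean) lands
    within eps of the observed summary."""
    rng = random.Random(seed)
    obs_mean = statistics.fmean(obs)
    accepted = []
    for _ in range(n_sims):
        theta = rng.uniform(-5, 5)
        sim = [rng.gauss(theta, 1) for _ in range(len(obs))]
        if abs(statistics.fmean(sim) - obs_mean) < eps:
            accepted.append(theta)
    return accepted

data_rng = random.Random(7)
obs = [data_rng.gauss(2.0, 1.0) for _ in range(25)]
loose = abc_rejection(obs, eps=0.5)   # many acceptances, diffuse posterior
tight = abc_rejection(obs, eps=0.1)   # few acceptances, sharper posterior
```

Shrinking eps sharpens the approximate posterior at the cost of acceptance rate; comparing the accepted samples across several eps values is exactly the stability check the answer proposes.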
48,893 | How to draw a random sample from distribution of prediction?
Don't really agree with Macro. To me, it seems like what you're asking is how to perform a Bayesian analysis, in which you specify prior distributions over your $\beta_i$ and combine your observed data to obtain a posterior ("predictive") distribution, which you can sample from. This not only has benefits in terms of a...
48,894 | How to draw a random sample from distribution of prediction?
I think you want to simulate from the predictor distribution values ${\rm age}_{i}$, ${\rm sex}_{i}$ and error terms $u_{i}$ and calculate
$$ (y_{t})_{i} = \hat{\beta}_{0} + \hat{\beta}_{1}{\rm age}_{i} + \hat{\beta}_{2} {\rm sex}_{i} + \hat{\beta}_{3}(y_{t-1})_{i} + u_{i} $$
to generate a sample $y_{1}, \ldots, y_{t}$...
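Iterating that equation with fresh error draws gives a Monte Carlo sample from the distribution of the prediction. A Python sketch; the coefficient values below are purely illustrative (the post gives no fitted numbers), and all names are mine:

```python
import random

# Illustrative fitted coefficients and error standard deviation:
b0, b_age, b_sex, b_lag, sigma = 2.0, 0.05, 0.3, 0.6, 1.0

def simulate_path(age, sex, y0, steps, seed=0):
    """One simulated path of y_t: iterate the fitted equation,
    drawing a fresh error u_t ~ N(0, sigma^2) at each step."""
    rng = random.Random(seed)
    y, path = y0, []
    for _ in range(steps):
        y = b0 + b_age * age + b_sex * sex + b_lag * y + rng.gauss(0, sigma)
        path.append(y)
    return path

# Many independent paths approximate the distribution of y_T:
finals = [simulate_path(age=40, sex=1, y0=5.0, steps=30, seed=s)[-1]
          for s in range(2000)]
```

With |b_lag| < 1 the simulated values settle around the stationary mean (b0 + b_age*age + b_sex*sex) / (1 - b_lag); quantiles of `finals` give a simulation-based prediction interval.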
48,895 | Choosing variables for Discriminant Analysis
You can get rid of some by looking for pairs that are very highly correlated and randomly deleting one of the pair.
Then you can look at partial least squares, and pick variables that are important in the PLS solution.
I did this with a similar problem and it worked pretty well (that is, the resulting discriminant fun...
48,896 | What is the best way of weighing cardinal scores and Likert scale scores to create a composite score?
Combining likert items with different numeric scalings:
Taking the sum or the mean of a set of items is standard practice in the behavioural and social sciences where each item is measured on the same response scale (e.g., a 1 to 5 likert scale).
If you add or subtract a constant to the scaling of an item, this will n...
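When items sit on different numeric scales, one common convention (a choice, not the only one) is to z-score each item first and then average, so no item dominates merely because of its units. A stdlib Python sketch with my own naming:

```python
from statistics import fmean, pstdev

def zscore(xs):
    """Standardize one item's scores to mean 0, sd 1."""
    m, s = fmean(xs), pstdev(xs)
    return [(x - m) / s for x in xs]

def composite(items):
    """items: one list per item (columns), possibly on different numeric
    scales. Standardize each item, then average across items to get one
    composite score per respondent."""
    zcols = [zscore(col) for col in items]
    return [fmean(vals) for vals in zip(*zcols)]

# Two items on very different scales, three respondents:
scores = composite([[1, 2, 3], [10, 30, 20]])
```

The resulting composite has mean zero by construction; unequal weights can be introduced by replacing the plain average with a weighted one.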
48,897 | Generating data with a pre-specified odds ratio
It appears you're asking how to generate bivariate binary data with a pre-specified odds ratio. Here I will describe how you can do this, as long as you can generate discrete random variables (as described here), for example.
If you want to generate data with a particular odds ratio, you're talking about binary that...
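One simple construction (a choice among several; the truncated answer may use a different parametrization) fixes P(X = 1) and P(Y = 1 | X = 0), then solves for P(Y = 1 | X = 1) so the conditional odds ratio hits the target. A Python sketch with my own naming:

```python
import random

def sample_binary_pair(n, p_x=0.5, q0=0.3, odds_ratio=2.0, seed=0):
    """Binary pairs (x, y) with odds(Y=1|X=1) / odds(Y=1|X=0) = odds_ratio.
    q0 = P(Y=1 | X=0); q1 is derived from the target odds ratio."""
    o1 = odds_ratio * q0 / (1 - q0)   # odds of Y=1 given X=1
    q1 = o1 / (1 + o1)
    rng = random.Random(seed)
    return [((x := int(rng.random() < p_x)),
             int(rng.random() < (q1 if x else q0)))
            for _ in range(n)]

# Empirical check: the sample odds ratio should sit near the target.
data = sample_binary_pair(50000)
a = sum(1 for x, y in data if x and y)
b = sum(1 for x, y in data if x and not y)
c = sum(1 for x, y in data if not x and y)
d = sum(1 for x, y in data if not x and not y)
or_hat = (a * d) / (b * c)
```

Because the odds ratio is invariant to which margin you fix, you are free to pick p_x and q0 to match whatever prevalences you need.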
48,898 | Analyzing treatment effect with possibly flawed control data
There is a growing econometric literature on the misclassification of treatment status.
A standard difference-in-difference approach would be a natural starting point here - see e.g. http://www.nber.org/WNE/lect_10_diffindiffs.pdf p.17 mentioning the Poisson case. The problem with misclassification for a general conditiona...
48,899 | Analyzing treatment effect with possibly flawed control data
I'd suggest looking into multiple imputation or other missing data approaches for dealing with your control data. You can build a vast array of different possible combinations of whether or not a given control was on or off treatment, and see how they affect your results.
When it comes down to it, yes, you can combine...
48,900 | Statistically comparing classifiers using only confusion matrix (or average accuracies)
You want to test whether $p_A - p_B > 0$, where $p_A, p_B$ are the accuracies of the classifiers. To test this, you need an estimate of $p_A - p_B$ and ${\rm Var}(p_A - p_B) = {\rm Var}(p_A) + {\rm Var}(p_B) - 2\,{\rm Cov}(p_A, p_B)$. Without knowing the samples that each classifier gets right/wrong, you won't be able to estimate the covariance, thu...
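If, and only if, the two classifiers were evaluated on independent samples, the covariance term above is zero and the comparison reduces to a two-proportion z test on the accuracy counts. A Python sketch (my own naming; with a shared test set you need the per-sample agreement counts, e.g. for McNemar's test, instead):

```python
import math

def accuracy_diff_z(correct_a, n_a, correct_b, n_b):
    """Pooled two-proportion z statistic for H0: p_A = p_B,
    assuming independent evaluation samples (Cov(p_A, p_B) = 0)."""
    pa, pb = correct_a / n_a, correct_b / n_b
    pooled = (correct_a + correct_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (pa - pb) / se

# 90/100 correct vs 80/100 correct:
z = accuracy_diff_z(90, 100, 80, 100)
```

Here z is about 1.98, borderline at the usual 5% two-sided level; ignoring a positive covariance (same test set) would understate significance, which is the caveat the answer raises.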