Column summary (name: type, min–max length):
idx: int64, 1–56k
question: string, 15–155
answer: string, 2–29.2k
question_cut: string, 15–100
answer_cut: string, 2–200
conversation: string, 47–29.3k
conversation_cut: string, 47–301
44,501
Probability that the square of a random integer ends in 1
I think the solution is simply this: every number has only ten possible last digits, and the last digit is all that matters in telling whether a number ends in 1. Thus, if you select a number at random, the last digit has only ten possibilities, and that's what makes up your sample space: you only focus on the last digit. Every number has ...
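The last-digit argument can be checked by brute force. A minimal Python sketch (not part of the original answer) enumerates the ten possible last digits and keeps those whose square ends in 1:

```python
# Only the last digit of n determines the last digit of n^2,
# so enumerate the ten possible last digits.
digits_ending_in_1 = [d for d in range(10) if (d * d) % 10 == 1]
print(digits_ending_in_1)            # [1, 9]
print(len(digits_ending_in_1) / 10)  # 0.2, under a uniform last digit
```

Only 1 and 9 qualify (1² = 1, 9² = 81), giving probability 2/10 under the uniform-last-digit model.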
44,502
Algorithm for minimization of sum of squares in regression packages
No, lm in R doesn't use gradient descent to fit linear models. Linear least squares has an explicit solution. If we ignore weights, and the possibility of multiple $y$'s, and just deal with "plain" multiple regression: $E(y) = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \ldots + \beta_p x_p + \epsilon$ $\quad\quad\,\,=X\beta...
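The explicit solution is $\hat\beta = (X^\top X)^{-1} X^\top y$. A NumPy sketch (illustrative, with simulated data; R's lm in fact solves the same problem via a QR decomposition rather than by forming $X^\top X$):

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), rng.normal(size=(50, 2))])  # intercept + 2 predictors
beta_true = np.array([1.0, 2.0, -0.5])
y = X @ beta_true + rng.normal(scale=0.1, size=50)

# Normal equations: solve (X'X) beta = X'y -- explicit, no iteration
beta_normal = np.linalg.solve(X.T @ X, X.T @ y)

# Numerically preferred route (orthogonal decomposition, as lm uses)
beta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)

print(np.allclose(beta_normal, beta_lstsq))  # True: same closed-form answer
```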
44,503
Is clustering (kmeans) appropriate for partitioning a one-dimensional array?
Clustering in one dimension has some special properties that on occasion have been exploited in customised methods. Often it seems neglected in textbook literature, which concentrates on more general problems. See (for example) the answer (not really the question!) to How can I group numerical data into naturally form...
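One way the special structure of one dimension gets exploited: the data can be sorted, and groups cut at the widest gaps. A hypothetical sketch (not one of the methods the answer links to):

```python
def split_at_largest_gaps(values, k):
    """Partition sorted 1-d data into k groups by cutting at the k-1 widest gaps."""
    xs = sorted(values)
    # Indices of consecutive pairs, ranked by gap width, widest first
    gaps = sorted(range(len(xs) - 1), key=lambda i: xs[i + 1] - xs[i], reverse=True)
    cuts = sorted(gaps[: k - 1])
    groups, start = [], 0
    for c in cuts:
        groups.append(xs[start : c + 1])
        start = c + 1
    groups.append(xs[start:])
    return groups

print(split_at_largest_gaps([1.0, 1.1, 5.0, 5.2, 9.9, 10.0], 3))
# [[1.0, 1.1], [5.0, 5.2], [9.9, 10.0]]
```

Sorting makes this an O(n log n) pass; no such shortcut exists in higher dimensions, which is why the 1-d case gets customised methods.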
44,504
Is clustering (kmeans) appropriate for partitioning a one-dimensional array?
Well, k-means certainly works on 1-dimensional data. But it doesn't exploit the properties of the data well, such as being sortable. There are specialized algorithms such as Jenks Natural Breaks optimization, for example. Kernel Density Estimation (KDE) works really well on 1-dimensional data, and by looking for minima...
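The KDE idea in a minimal NumPy sketch (assumed toy data and bandwidth, not from the original answer): estimate the density on a grid and split the data at its local minima.

```python
import numpy as np

# Two well-separated 1-d clusters (simulated)
data = np.concatenate([np.random.default_rng(1).normal(0, 0.3, 200),
                       np.random.default_rng(2).normal(4, 0.3, 200)])

grid = np.linspace(data.min(), data.max(), 400)
bandwidth = 0.3
# Gaussian kernel density estimate evaluated on the grid (unnormalized)
dens = np.exp(-0.5 * ((grid[:, None] - data[None, :]) / bandwidth) ** 2).sum(axis=1)

# Interior local minima of the density are natural split points
is_min = (dens[1:-1] < dens[:-2]) & (dens[1:-1] < dens[2:])
splits = grid[1:-1][is_min]
print(splits)  # split point(s) in the low-density valley between the modes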
44,505
How to partition the variance explained at group level and at individual level?
Yes, there is a consensus: you should use the variances, not the standard deviations, in computing the intra-class correlation (ICC). The two-level random-intercept-only model is $$ y_{ij} = \beta_0 + u_{0j} + e_{ij}, $$ where the random intercepts $u_{0j}$ have variance $\sigma^2_{u_0}$ and the residuals $e_{ij}$ have...
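The ICC formula implied by that model is $\sigma^2_{u_0} / (\sigma^2_{u_0} + \sigma^2_e)$: the share of total variance sitting at the group level. A trivial sketch with hypothetical variance estimates:

```python
def icc(var_group, var_resid):
    """Intra-class correlation: share of total variance at the group level."""
    return var_group / (var_group + var_resid)

# Hypothetical estimates from a random-intercept-only model
print(icc(2.0, 6.0))  # 0.25: a quarter of the variance is between groups
```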
44,506
Is a count variable with a large, but finite, number of possible values categorical or continuous?
There is, as far as I know, no taxonomy of variables that captures all the contrasts that might be important for some theoretical or practical purpose, even for statistics alone. If such a taxonomy existed, it would probably be too complicated to be widely acceptable. It is best to focus on examples rather than give n...
44,507
Is a count variable with a large, but finite, number of possible values categorical or continuous?
I think that for your purposes the distinction between categorical, ordinal and scalar variables is more relevant, where a scalar variable may have either discrete or pseudo-continuous values, but the units in which they are measured have identical sizes or intervals. For example, very few people need to consider the ...
44,508
Should coin flips be modeled as Bernoulli or binomial draws in RJags?
Both models will give the exact same results. Why? The Likelihood principle. RJags is an R package that uses the software JAGS to conduct Bayesian inference, and any fully Bayesian procedure, one where inference proceeds from the posterior distribution, will satisfy the Likelihood principle. Essentially, the Likelih...
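The equivalence can be verified numerically. With a conjugate Beta prior, $n$ Bernoulli draws with $k$ successes and one Binomial($n$, $k$) observation yield the same Beta posterior, because the two likelihoods differ only by the constant $\binom{n}{k}$, which cancels on normalization. A Python sketch (grid approximation, assumed toy numbers):

```python
import numpy as np
from math import comb

a, b = 1.0, 1.0      # Beta(1, 1) prior
n, k = 10, 7         # 10 flips, 7 heads
p = np.linspace(0.001, 0.999, 999)

prior = p ** (a - 1) * (1 - p) ** (b - 1)
lik_bernoulli = p ** k * (1 - p) ** (n - k)   # product of 10 Bernoulli terms
lik_binomial = comb(n, k) * lik_bernoulli     # same, times the constant C(n, k)

def normalize(w):
    return w / w.sum()

# The constant cancels on normalization: identical posteriors
print(np.allclose(normalize(prior * lik_bernoulli),
                  normalize(prior * lik_binomial)))  # True
```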
44,509
Should coin flips be modeled as Bernoulli or binomial draws in RJags?
One draw from a binomial distribution is generally enough, but it depends on the data you have. If you only know how many heads were seen in total across the individual coin flips, then the binomial distribution is enough; there is no need for a detailed model with N Bernoulli flips. However, if you have data on results of indi...
44,510
How do I validate my multiple linear regression model?
Note that the predicted residual sum of squares, PRESS, is obtained by jack-knifing the sample: there's no sense in calculating it for training & test sets. Calculate it for a model fitted to the whole sample (& compare it to the RSS to assess the amount of over-fitting). For ordinary least-squares regression there's an ana...
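For OLS the leave-one-out residuals are available analytically from the hat matrix, $e_{(i)} = e_i / (1 - h_{ii})$, so PRESS $= \sum_i \big(e_i / (1 - h_{ii})\big)^2$ needs no refitting. A NumPy sketch with simulated data, checked against brute-force refitting:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(30), rng.normal(size=(30, 2))])
y = X @ np.array([1.0, 0.5, -1.0]) + rng.normal(size=30)

H = X @ np.linalg.solve(X.T @ X, X.T)          # hat matrix
e = y - H @ y                                  # ordinary residuals
press = np.sum((e / (1 - np.diag(H))) ** 2)    # PRESS without refitting

# Brute-force check: actually refit leaving each point out
loo = []
for i in range(len(y)):
    m = np.ones(len(y), bool); m[i] = False
    beta = np.linalg.lstsq(X[m], y[m], rcond=None)[0]
    loo.append(y[i] - X[i] @ beta)
print(np.isclose(press, np.sum(np.array(loo) ** 2)))  # True
```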
44,511
How do I validate my multiple linear regression model?
You may use the Root Mean Squared Error (RMSE), which is a measure of accuracy between two sets of values. Use your model of type $Y = \beta_0 + \beta_1X_1 + \beta_2X_2 + \dots + \beta_nX_n$, calibrated on your 80% dataset, on the independent variables (IV) of your other 20% dataset (the validation dataset). In R use rmse...
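The answer points at an R function; the computation itself is one line. A Python sketch of the same metric:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean squared error between observed and predicted values."""
    return np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2))

print(rmse([1.0, 2.0, 3.0], [1.0, 2.0, 5.0]))  # sqrt(4/3) ~= 1.155
```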
44,512
Understanding the definition of omnibus tests
I was wondering what it means by a variance being explained or unexplained? In the context of ANOVA it means the variance "explained" by group membership and the variance that remains unexplained. To understand this in detail you have to really look at the equations. I'll try to explain it anyway without introduci...
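The decomposition behind "explained vs. unexplained" is $SS_{total} = SS_{between} + SS_{within}$. A NumPy sketch with two made-up groups:

```python
import numpy as np

groups = [np.array([1.0, 2.0, 3.0]), np.array([6.0, 7.0, 8.0])]
allv = np.concatenate(groups)
grand = allv.mean()

ss_total = ((allv - grand) ** 2).sum()
ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)  # explained by group
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)        # unexplained

print(np.isclose(ss_total, ss_between + ss_within))  # True: the decomposition holds
print(ss_between / ss_total)                         # share of variance explained
```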
44,513
Understanding the definition of omnibus tests
I wouldn't look for a rigorous definition of omnibus test. It seems typically to be used for overall tests with wide scope, packing several tests into one. Other terms used with similar import are portmanteau statistic and factotum statistic. Over a century or more, there have been all sorts of fashions in terminology, i...
44,514
How random are the results of the kmeans algorithm?
There is more than one k-means algorithm. You probably refer to Lloyd's algorithm, which depends only on the initial cluster centers. But there is also MacQueen's, which depends on the sequence, i.e. the ordering, of the points. Then there are Hartigan, Wong, Forgy, ... And of course, various implementations may have implementatio...
44,515
How random are the results of the kmeans algorithm?
K-means is only randomized in its starting centers. Once the initial candidate centers are determined, it is deterministic after that point. Depending on your implementation of kmeans the centers can be chosen the same each time, similar each time, or completely random each time. With MATLAB/R implementations, the choi...
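The determinism after initialization is easy to see in a pure-Python sketch of Lloyd's iteration (a toy 1-d illustration, not any particular library's implementation): identical starting centers give identical results every time.

```python
def lloyd(points, centers, iters=10):
    """Plain Lloyd's k-means on 1-d data: deterministic given the initial centers."""
    centers = list(centers)
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            # Assign each point to its nearest current center
            j = min(range(len(centers)), key=lambda c: (p - centers[c]) ** 2)
            clusters[j].append(p)
        # Recompute centers as cluster means (keep old center if a cluster empties)
        centers = [sum(c) / len(c) if c else centers[j]
                   for j, c in enumerate(clusters)]
    return centers

pts = [1.0, 1.2, 0.9, 8.0, 8.3, 7.9]
print(lloyd(pts, [0.0, 5.0]) == lloyd(pts, [0.0, 5.0]))  # True: same starts, same result
```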
44,516
Logistic regression: categorical predictor vs. quantitative predictor
That is not a necessary result, but it is certainly plausible. If you turn a quantitative predictor into a single categorical predictor you lose a lot of information; with the categorical predictor you only know whether an observation is below or above a certain threshold (e.g. the mean or median), while with a quantitat...
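A small simulation (hypothetical, not from the original answer) makes the information loss concrete: dichotomizing a predictor at its median discards all within-half variation, which weakens its observed association with the outcome.

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(size=2000)
y = x + rng.normal(size=2000)                 # outcome driven by x plus noise

x_binned = (x > np.median(x)).astype(float)   # keep only above/below the median

r_full = np.corrcoef(x, y)[0, 1]
r_binned = np.corrcoef(x_binned, y)[0, 1]
print(r_full > r_binned)  # True: binning weakens the observed association
```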
44,517
Logistic regression: categorical predictor vs. quantitative predictor
It depends what you mean by "the same variable except it is continuous". Binning a truly continuous variable into two or more categories loses information as described by @Maarten. If you're comparing analyses treating the predictor values, say $\{1,2,3,4,5,6,7,8,9,10\}$, as either continuous or categorical, in the latte...
44,518
Logistic regression: categorical predictor vs. quantitative predictor
As @MaartenBuis wrote, you lose a lot of information by categorizing. Lagakos wrote an excellent article a while ago about the loss of power when mismodeling explanatory variables. In Table IV you can see how much information you lose by discretizing under different schemas. You may also want to have a look at Frank Harr...
44,519
Is it feasible to use global optimization methods to train deep learning models?
In general, gradient based techniques for optimizing neural networks are more specific and optimized for the task than the two generic optimization algorithms you mention, which don't require a gradient. Geoff Hinton mentioned evolution based approaches to optimizing neural networks in his slides on deep learning. He...
44,520
Is it feasible to use global optimization methods to train deep learning models?
It is probably a less-researched subject at the moment, as multipoint search algorithms usually require more processing power than gradient-based methods. Multipoint search algorithms do converge to a better optimum, though. You can also use e.g. evolutionary algorithms in the following ways: optimize the number of layers, number of...
44,521
Scale parameters -- How do they work, why are they sometimes dropped?
A scale parameter merely establishes a unit of measurement, such as a foot, inch, angstrom, or parsec. Without the scale parameter, we still know the shape and location of the distribution but we cannot label the axes, except for showing where the origin is. Here is a distribution (with the origin at its left) shown a...
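The "unit of measurement" point can be checked numerically: rescaling multiplies the mean and standard deviation by the scale factor but leaves standardized shape measures untouched. A NumPy sketch (assumed exponential example):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=1.0, size=100_000)   # one "unit" of scale
y = 2.54 * x                                   # same data in different units

def skewness(v):
    """Standardized third moment: invariant under changes of scale."""
    v = v - v.mean()
    return (v ** 3).mean() / (v ** 2).mean() ** 1.5

print(np.isclose(y.mean(), 2.54 * x.mean()))   # True: the mean scales with the unit
print(np.isclose(skewness(x), skewness(y)))    # True: the shape does not
```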
44,522
Scale parameters -- How do they work, why are they sometimes dropped?
I would say that the importance of proportionality and equality depends entirely on what you're trying to say about the distribution or data. Let's think about some standard properties in statistics that people are interested in: Mean: Scale parameters alter the mean of most distributions, though many common distribut...
44,523
Kaplan-Meier p-values
The p-value to which you are referring is the result of the log-rank test, or possibly the Wilcoxon. This test compares expected to observed failures at each failure time in both the treatment and control arms. It is a test of the entire distribution of failure times, not just the median. The null hypothesis for the log-rank te...
44,524
Kaplan-Meier p-values
Here is a made-up example of two survival curves that have almost the same median survival (and the same five-year survival) but are very different. The log-rank test finds that the difference between the two curves is statistically significant with P=0.04. This simply points out the obvious: that two survival curves can ha...
44,525
Multicollinearity in OLS
Re your 1st question: Collinearity does not make the estimators biased or inconsistent, it just makes them subject to the problems Greene lists (with @whuber's comments for clarification). Re your 3rd question: High collinearity can exist with moderate correlations; e.g. if we have 9 iid variables and one that is the s...
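The "9 iid variables plus their sum" example is easy to simulate: each pairwise correlation with the sum is modest (about 1/3), yet the design matrix is exactly singular. A NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
Z = rng.normal(size=(5000, 9))        # 9 iid predictors
s = Z.sum(axis=1)                     # 10th predictor: their sum
X = np.column_stack([Z, s])

# Each predictor correlates with the sum at only about 1/3
corrs = [np.corrcoef(Z[:, j], s)[0, 1] for j in range(9)]
print(round(min(corrs), 2), round(max(corrs), 2))

# Yet X is perfectly collinear: rank 9, not 10
print(np.linalg.matrix_rank(X))  # 9
```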
44,526
What am I supposed to do if Cronbach's alpha is negative?
You have only weak to very weak correlations (and sometimes negative ones) between your variables. Your alpha value is surely negative because the mean of all the inter-item correlations is negative. Maybe you can use a factor analysis to check the factorial structure and the correlations between the extracted factors? But give...
44,527
What am I supposed to do if Cronbach's alpha is negative?
As @alric said, all your correlations are weak. I'd conclude that these questions are not a scale, should not be added together or combined in some other way, and are each really separate entities.
44,528
What am I supposed to do if Cronbach's alpha is negative?
This almost always means that you have some variables which should be reverse scored, and you have not reversed them. The R package psych contains a function alpha() which checks for reversal errors and fixes them.
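The reversal fix itself is a one-line transform; here is a minimal Python sketch (the scale bounds and helper name are illustrative, and in R the psych package's alpha() can detect and flip such items for you):

```python
# Minimal sketch of reverse scoring: on a Likert scale running from lo to hi,
# the reversed response is (lo + hi) - x. Scale bounds 1..5 are an assumption.

def reverse_score(x: int, lo: int = 1, hi: int = 5) -> int:
    return (lo + hi) - x

responses = [1, 2, 4, 5]
print([reverse_score(x) for x in responses])  # -> [5, 4, 2, 1]
```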
44,529
What am I supposed to do if Cronbach's alpha is negative?
My personal observation has been that when someone calculates alpha for a mixture of scales (dichotomous, polychotomous, Likert, etc.), the probability of alpha being negative or low is higher. So the conclusion from my observation, which may be personal or biased, is that consistent scales should be used when calculatin...
44,530
How to test group differences when neither parametric nor nonparametric assumptions are met?
What you write is a compilation of many common misconceptions about these tests. The short answer is: use the t test with Welch correction. Now, the details. I would like to test whether mean (or median) answers in both groups are significantly different. Means and medians are different things. What people usually d...
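The recommended statistic can be sketched by hand. Below is a minimal Python illustration of the Welch (unequal-variance) t statistic; the toy samples are made up, and a real analysis would use a statistics package rather than this sketch:

```python
# Minimal sketch of the Welch t statistic: difference of means divided by
# the square root of the sum of per-group (variance / n) terms.
import math

def welch_t(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)  # sample variance of a
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)  # sample variance of b
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

group1 = [5.1, 4.9, 5.4, 5.0, 5.2]
group2 = [4.2, 4.8, 4.4, 4.6]
print(welch_t(group1, group2))  # positive: group1's mean is higher
```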
44,531
How to test group differences when neither parametric nor nonparametric assumptions are met?
Perhaps you could use bootstrapping. You have 100 points. If there is not a significant difference, then these 100 points together are representative of the entire distribution of values. So, pool your samples, and draw (with replacement) two groups of 50 points from this sampling space. Measure the difference between...
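A minimal sketch of this pooled resampling scheme, in Python for illustration (the toy data, fixed seed, and small group sizes are assumptions; the answer's setup uses two groups of 50):

```python
# Sketch of the pooled resampling test: pool both samples, repeatedly redraw
# two groups with replacement, and compare the observed mean difference with
# the resampled differences.
import random

def pooled_bootstrap_pvalue(a, b, n_boot=2000, seed=0):
    rng = random.Random(seed)
    pool = a + b
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    hits = 0
    for _ in range(n_boot):
        ra = [rng.choice(pool) for _ in range(len(a))]  # redraw group A from the pool
        rb = [rng.choice(pool) for _ in range(len(b))]  # redraw group B from the pool
        if abs(sum(ra) / len(ra) - sum(rb) / len(rb)) >= observed:
            hits += 1
    return hits / n_boot  # share of resampled differences at least as extreme

a = [1.0, 1.2, 0.9, 1.1, 1.3]
b = [1.0, 1.1, 1.2, 0.8, 1.0]
print(pooled_bootstrap_pvalue(a, b))
```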
44,532
How to estimate missing data?
x <- 1:30; y <- c(rnorm(25) + 1:25, rep(NA, 5))  # generate data with NAs
df1 <- data.frame(x, y)                          # combine into data frame
lmx <- lm(y ~ x, data = df1)                     # create model to predict from
ndf <- data.frame(x = 1:30)                      # create data to predict to
df1$fit <- predict(lmx...
44,533
How to estimate missing data?
It is often a good idea to consider the possible reasons for data being missing, i.e. missing completely at random, missing at random, or missing not at random. Depending on this, methods to estimate missing data may be biased. A sophisticated way to deal with data missing at random is multiple imputation, which acknowledges...
44,534
How to estimate missing data?
Another approach would be to use a simulation-based solution like Gibbs sampling, based on statistics from past observations. I believe there is support for that in R: http://darrenjw.wordpress.com/2011/07/31/faster-gibbs-sampling-mcmc-from-within-r/
44,535
What is the proper naming scheme for dataset parts?
It seems like in your setup, your inputs (the data that you're using to model) and your outputs (what you'd like to predict) are both in the same table. In that case it's a bit complicated, as: A row is an input/output tuple (Example; Observation; Data point; Datum) A single cell is either an input feature value (or...
44,536
What is the proper naming scheme for dataset parts?
Based on Andrew Ng's ml-class.org and Tom Mitchell's "Machine Learning" book I think they will be called Training example Feature value Training set Output/target variable But naming will depend on the algorithm, I believe. Say, if you use Decision Trees then your training examples would become instances and your fe...
44,537
What is the proper naming scheme for dataset parts?
(1) data point, (2) feature value. I think that for regression: (3) regressors, explanatory variables, input variables, predictor variables; (4) regressand, endogenous variable, response variable, measured variable. For classification: (3) features, input features, input variables; (4) class.
44,538
What is the proper naming scheme for dataset parts?
Answering more generally, as I'm not sure if your datasets or textbooks are always going to be restricted to weather data, and not duplicating the answers above observations, or cases I always refer to this as a vector ij independent variables (normally in an experimental or quasi-experimental context only) dependent ...
44,539
How can I draw a boxplot without boxes in R?
The stripchart function in the graphics library seems to be what you want if you want to plot the data 1-dimensionally for each group. It produces a somewhat basic plot, but you can customize it:
business <- runif(50, min = 65, max = 100)
law <- runif(50, min = 60, max = 95)
df <- data.frame(group = rep(c("B...
44,540
How can I draw a boxplot without boxes in R?
One interesting application of R's stripchart() is that you can use jittering or stacking when there is some overlap in data points (see method=). With lattice, the corresponding function is stripplot(), but it lacks the above method argument to separate coincident points (but see below for one way to achieve stacking)....
44,541
How can I draw a boxplot without boxes in R?
I got a little curious about how the violin plot works when I saw this question. This also led me to the beanplot, which might be on the same theme. The base data creation for all three plots:
business <- runif(50, min = 65, max = 100)
law <- runif(50, min = 60, max = 95)
The violin plot:
library(vioplot)
vioplot(business, l...
44,542
How to display magnitude of change over time between two series?
If you are interested in the changes as a fraction, then simply plot the logarithm of the values. A fixed distance in log space is a fixed fractional change, so if one line is steeper than the other it is changing more rapidly. The log scale may also allow you to conveniently get both sets of values onto one graph with...
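The claim about log scales can be checked numerically; a small Python sketch with invented series:

```python
# Minimal numeric check: on a log scale, equal fractional changes are equal
# vertical distances, regardless of the series' level.
import math

low = [100, 110, 121]                      # +10% per step
high = [1_000_000, 1_100_000, 1_210_000]   # also +10% per step

steps_low = [math.log(b) - math.log(a) for a, b in zip(low, low[1:])]
steps_high = [math.log(b) - math.log(a) for a, b in zip(high, high[1:])]
print(steps_low, steps_high)  # every step equals log(1.1)
```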
44,543
How to display magnitude of change over time between two series?
You ask "is this correct?" and "is there a better way to do it?" but the answers to these questions depend on what exactly you are trying to do. A statistical graph is "wrong" only if it does things like distort the data; it is "bad" if it is hard to read, etc. Are you interested in the difference between the two st...
44,544
How to display magnitude of change over time between two series?
In the financial press, a common way to display two or more time series (such as GDP or - relevant to the original question - stock prices) in a way that allows changes over time to be compared, is rebasing. A base time is selected, and the values of the series are scaled so that they are all 100 there. If the first se...
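A minimal sketch of rebasing, in Python for illustration (the series values are invented):

```python
# Sketch of rebasing: divide each series by its value at the base time and
# multiply by 100, so every series starts at 100 and later values read
# directly as percent of the base.

def rebase(series, base_index=0):
    base = series[base_index]
    return [100 * v / base for v in series]

stock = [50, 55, 45]               # a share price
market = [13000, 14300, 11700]     # a large index
print(rebase(stock))   # [100.0, 110.0, 90.0]
print(rebase(market))  # [100.0, 110.0, 90.0] -- same relative moves
```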
44,545
How to display magnitude of change over time between two series?
Try plotting both numbers using different scales for each one's Y-axis? (I don't know Python's matplotlib library, but I'd be surprised if it can't handle that.) The idea would be to make the Y-axis for the stock prices range between the lowest and highest prices seen, and the range for the other values to also be the l...
44,546
Creating univariable smoothed scatterplot on logit scale using R
You can find the H&L ALR on the web. I believe what H&L are doing is simply fitting a loess to the dfree ~ age relationship and then transforming the expected probabilities to logits. See below.
uis <- read.delim("http://www.umass.edu/statdata/statdata/data/uis.dat", skip=4, sep="", ...
44,547
Creating univariable smoothed scatterplot on logit scale using R
It didn't happen in this example, but you have to watch that the loess model doesn't get carried away and produce 'smoothed' probabilities that lie outside of (0,1). Following the example from Brett:
lprob <- predict(lfit)
lprob <- apply(cbind(lprob, 0.01), MARGIN=1, FUN=max)
lprob <- apply(cbind(lprob, 0.99), MARGIN=1,...
44,548
Creating univariable smoothed scatterplot on logit scale using R
The key here is that the logit is plotted on the y axis. When you're running a logistic regression, typically your data are a column of 1's and 0's. When values only occur at a limited number of discrete x values, they can be 'grouped', or turned into percentages. Let's assume that your data are in percentages. The ...
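The y-axis transform described here is easy to sketch; a Python illustration with made-up grouped counts:

```python
# Minimal sketch: group the 0/1 outcomes at each discrete x, convert to a
# proportion, then take the logit -- the scale used on the plot's y axis.
import math

def logit(p: float) -> float:
    return math.log(p / (1 - p))

grouped = {20: (3, 10), 30: (5, 10), 40: (8, 10)}  # x -> (successes, trials)
for x, (k, n) in grouped.items():
    p = k / n
    print(x, p, logit(p))  # logit is 0 at p = 0.5; sign flips around it
```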
44,549
Subdivisions in statistics
I wouldn't consider non-parametric or robust as being sub-categories of statistics in the way that frequentist and Bayesian are, simply because there are both frequentist and Bayesian methods for non-parametric and robust statistics. Frequentist and Bayesian are genuine sub-categories as they are based on fundamentall...
44,550
Subdivisions in statistics
I would not necessarily assert that those are the subdivisions present in statistics. If pressed, I'd argue that Frequentist versus Bayesian is the most clear division, although even that gets somewhat fuzzy at the edge cases and most people in practice seem to be a mix of the two. Robust and parametric/non-parametric ...
44,551
Probability of drawing no red balls from 20 draws without replacement given finite sample
Let $B$ denote the number of blue balls drawn and $R$ the number of red balls drawn; then you may apply the formula for the hypergeometric distribution: $$P(B = 20, R = 0) = \frac{\binom{10}{0}\binom{90}{20}}{\binom{100}{20}} = \frac{\binom{90}{20}}{\binom{100}{20}}$$ The last term exactly matches @Macro's answer, but the hypergeometric formula is more gen...
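For this question's numbers (90 blue, 10 red, 20 draws, zero red), the closed form is easy to evaluate exactly; a Python sketch:

```python
# Evaluating the hypergeometric closed form above with exact integer
# binomial coefficients.
from math import comb

p_no_red = comb(90, 20) / comb(100, 20)
print(p_no_red)  # roughly 0.09-0.10
```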
44,552
Probability of drawing no red balls from 20 draws without replacement given finite sample
Well, on the first try, you have a $90/100$ probability of not drawing a red ball; if the first was not a red ball, then on the second try there are still 10 red balls left, but only 99 balls to choose from, so you have an $89/99$ chance of not drawing a red ball. Similarly, on the third draw, if the second draw was also not ...
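This sequential product is easy to verify numerically and to check against the hypergeometric closed form; a Python sketch:

```python
# Multiply the conditional probabilities 90/100 * 89/99 * ... for 20 draws,
# and confirm the product matches C(90,20) / C(100,20).
from math import comb

p_seq = 1.0
for i in range(20):
    p_seq *= (90 - i) / (100 - i)   # i-th draw still avoids all 10 red balls

p_closed = comb(90, 20) / comb(100, 20)
print(p_seq, p_closed)  # agree up to floating-point rounding
```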
44,553
Looking for stats/probability practice problems with data and solutions
The Statistics topic area on Wikiversity is worth a look. It's got a long way to go before it's a comprehensive stand-alone syllabus, to be honest, but some of the Courses are more advanced than others, and when there's not much material as yet there are often links to free online resources.
44,554
Looking for stats/probability practice problems with data and solutions
If you are interested in Statistical Machine Learning, which seems to be THE thing these days, Tibshirani, Hastie, and Friedman's book is an invaluable resource. It is the latest edition and has a self contained website devoted to it.
44,555
Looking for stats/probability practice problems with data and solutions
I realise that this may not be what you are looking for, but R core and all packages come with data sets on which to practice the functionalities in each package. Many of these data sets are quite famous, and often a link is given to the paper in which the data are described. You could use these datasets in R and then ...
44,556
Plotting a heatmap given a dendrogram and a distance matrix in R
I don't know a specific function for that. The ones I used generally take raw data or a distance matrix. However, it would not be very difficult to hack already existing code, without knowing more than basic R. Look at the source code for the cim() function in the mixOmics package for example (I choose this one because...
44,557
Plotting a heatmap given a dendrogram and a distance matrix in R
Assuming you also have the raw data, you can use function heatmap(). It can take one or two dendrograms as input, if you want to avoid calculating the distances and clustering the objects again. Let's first simulate some data:

set.seed(1)
dat<-matrix(ncol=4, nrow=10, data=rnorm(40))

Then cluster the rows and columns: ...
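For readers outside R, here is a rough pure-Python sketch of what heatmap() does behind the scenes (my own illustration, not from the answer): cluster the rows, take the dendrogram's leaf order, and reorder the matrix so similar rows sit next to each other. Single linkage is an assumption here just to keep the code short; R's heatmap() defaults differ.

```python
import random

def euclid(a, b):
    # Euclidean distance between two rows
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def single_linkage_order(rows):
    # Naive agglomerative clustering with single linkage; each cluster
    # is kept as an ordered list of row indices (its dendrogram leaves).
    clusters = [[i] for i in range(len(rows))]
    def dist(c1, c2):
        return min(euclid(rows[i], rows[j]) for i in c1 for j in c2)
    while len(clusters) > 1:
        i, j = min(
            ((a, b) for a in range(len(clusters)) for b in range(a + 1, len(clusters))),
            key=lambda ab: dist(clusters[ab[0]], clusters[ab[1]]),
        )
        merged = clusters[i] + clusters[j]
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)] + [merged]
    return clusters[0]  # leaf order of the final tree

random.seed(1)
dat = [[random.gauss(0, 1) for _ in range(4)] for _ in range(10)]
order = single_linkage_order(dat)
reordered = [dat[i] for i in order]  # rows in the order a heatmap would draw them
print(order)
```

The reordered matrix is what you would then pass to an image/heatmap plotting routine.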
44,558
Plotting a heatmap given a dendrogram and a distance matrix in R
You might try looking in the maptree or ape packages. ...
44,559
Is there an unpaired version of the sign test?
Good (2005) defines the one-sample sign-test for the location parameter $\theta$ for a continuous symmetric variable $X$ as follows: Take the difference $D_i$ of each observation to the location parameter $\theta_0$ under the null hypothesis. Define an indicator variable $Z_i$ as $0$ when $D_i < 0$, and as $1$ when $D...
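The recipe described above amounts to a binomial test of the count of positive differences against Binomial(n, 1/2). Here is a minimal Python sketch of that idea (my own implementation for illustration; ties are dropped, and the two-sided p-value doubles the smaller tail):

```python
from math import comb

def sign_test_p(sample, theta0):
    # One-sample sign test: count positive differences from theta0
    # and compare against Binomial(n, 1/2) under the null.
    diffs = [x - theta0 for x in sample if x != theta0]  # drop ties
    n = len(diffs)
    k = sum(d > 0 for d in diffs)
    # two-sided p-value: double the smaller binomial tail (capped at 1)
    tail = sum(comb(n, i) for i in range(0, min(k, n - k) + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# A sample symmetric around 10 gives a large p-value:
print(sign_test_p([8, 9, 9.5, 10.5, 11, 12], theta0=10))
```

A sample entirely above theta0 would instead give the smallest achievable p-value, 2/2^n.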
44,560
Is there an unpaired version of the sign test?
I'm not sure if such a test can exist conceptually. The sign test uses the pairing of the data to decide whether one value is bigger than the corresponding other value. But in an unpaired situation there is nothing like a corresponding other value (every value in the other group could be a potential counterpart for com...
44,561
Is there an unpaired version of the sign test?
OK, I found that there is an unpaired analogue of the sign test (a test of medians). It is called the "median test", and you can read about it on Wikipedia.
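For concreteness, here is a small sketch of the chi-square version of that median test (assumed details on my part: classify each observation against the grand median of the pooled sample, no continuity correction):

```python
def median_test_stat(x, y):
    # Median test: build a 2x2 table of (group) x (above / at-or-below
    # the grand median of the pooled sample) and compute the
    # chi-square statistic for that table.
    pooled = sorted(x + y)
    n = len(pooled)
    grand_median = (pooled[(n - 1) // 2] + pooled[n // 2]) / 2
    a = sum(v > grand_median for v in x)   # x above
    b = sum(v > grand_median for v in y)   # y above
    c = len(x) - a                         # x at/below
    d = len(y) - b                         # y at/below
    N = a + b + c + d
    return N * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Fully separated groups give a large statistic:
print(median_test_stat([1, 2, 3, 4], [10, 11, 12, 13]))
```

Interleaved groups would give a statistic near zero, since roughly half of each group falls on each side of the grand median.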
44,562
Is there an unpaired version of the sign test?
The extension goes through introducing ranks to somewhat regulate the order of the data, and the result is the family of Wilcoxon tests (Mann-Whitney in particular).
44,563
Analyze and generate "clumpy" distributions?
If assessing spatial auto-correlation is what you're interested in, here is a paper that simulates data and evaluates different auto-regressive models in R. Spatial autocorrelation and the selection of simultaneous autoregressive models by: W. D. Kissling, G. Carl Global Ecology and Biogeography, Vol. 17, No. 1. (January...
44,564
Analyze and generate "clumpy" distributions?
I think suitable 'clumpy coefficients' are measures of spatial autocorrelation such as Moran's I and Geary's C. Spatial statistics is not my area and I don't know about simulation though.
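To make Moran's I concrete, here is a small self-contained sketch on a grid with rook (up/down/left/right) adjacency; the grids and weights are my own illustration, not from the answer. Clumped patterns give positive I, checkerboards give negative I:

```python
def morans_i(grid):
    # Moran's I with binary rook-adjacency weights:
    # I = (n / W) * sum_ij w_ij (x_i - xbar)(x_j - xbar) / sum_i (x_i - xbar)^2
    n_rows, n_cols = len(grid), len(grid[0])
    vals = [v for row in grid for v in row]
    n = len(vals)
    mean = sum(vals) / n
    dev = {(r, c): grid[r][c] - mean for r in range(n_rows) for c in range(n_cols)}
    num, w_sum = 0.0, 0
    for r in range(n_rows):
        for c in range(n_cols):
            for dr, dc in ((0, 1), (1, 0), (0, -1), (-1, 0)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < n_rows and 0 <= cc < n_cols:
                    num += dev[(r, c)] * dev[(rr, cc)]
                    w_sum += 1
    den = sum(d * d for d in dev.values())
    return (n / w_sum) * num / den

clumped = [[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 1, 1], [0, 0, 1, 1]]
checker = [[1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1]]
print(morans_i(clumped), morans_i(checker))  # positive vs negative
```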
44,565
Analyze and generate "clumpy" distributions?
You could calculate an index of dispersion measure over your space to gauge clumpiness. One starting point for more information would be the ecology packages and literature to see how they simulate such things.
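As a sketch of that idea (my own toy numbers): divide the space into quadrats, count points per quadrat, and take the variance-to-mean ratio. A ratio well above 1 suggests clumping; near 1 suggests complete spatial randomness; near 0 suggests an evenly spread pattern:

```python
def dispersion_index(counts):
    # Index of dispersion (variance-to-mean ratio) of quadrat counts
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / (n - 1)  # sample variance
    return var / mean

clumped_counts = [0, 0, 12, 0, 0, 0]   # all points in one quadrat
even_counts = [2, 2, 2, 2, 2, 2]       # perfectly even spread
print(dispersion_index(clumped_counts), dispersion_index(even_counts))
```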
44,566
Analyze and generate "clumpy" distributions?
Typical measures of autocorrelation, such as Moran's I, are global estimates of clumpiness and could be masked by a trend or by "averaging" of clumpiness. There are two ways you could handle this: 1) Use a local measure of autocorrelation - but the drawback is you don't get a single number for clumpiness. An example o...
44,567
Ways to increase forecast accuracy [closed]
I have been forecasting retail demand for 16 years now. Retail is probably not what you are interested in, but a few comments on your ideas plus a few other ideas might be helpful. Tweaking the algorithms: to be honest, I usually find that better algorithms are always beaten by better data, and better understood data....
44,568
Ways to increase forecast accuracy [closed]
First of all I agree 100% with Stephen's answer, I'll just add a little bit from my 2 years of experience! The ML vs traditional methods IMO boils down to a simple question: Do you have good drivers to use as variables? Time series methods work best for time series, of course you can use other factors to aid but wit...
44,569
Expectation of the ratio of sum (XY) and sum(X)
I will assume $a=0$ and $b=1$ in the following. Here is a simulation experiment to look at the variability of the expectation in $M$:

e=rep(0,N)
f=matrix(0,T,N)
for (t in 1:T){
  phi2=runif(N)
  we=runif(N)/(1-phi2)
  f[t,]=cumsum(we*(5-4*phi2))/cumsum(we)
  e=e+f[t,]}

with the plot of the averaged e (in red) against t...
44,570
Expectation of the ratio of sum (XY) and sum(X)
In your case of $\text{sum}(XY)/\text{sum}(X)$ you have that the $X$ and $Y$ are correlated. We can rewrite it in a different form such that we have a similar weighted average expression but with uncorrelated $X$ and $Y$. You will get that you can relate it to the following expression: $$1 + 4 E\left[\left(\frac{\sum_{...
44,571
What is the probability a person sees a tree by looking out of the window?
How to calculate the probability with any degrees of accuracy?? There is no way to compute this because the estimates that we make to perform the computation have an undefined accuracy due to lack of knowledge. The way that it is generally tackled is that we use some simplified model and apply it to the problem. But t...
44,572
What is the probability a person sees a tree by looking out of the window?
Well that's what statistics is about, right? All those variables you mentioned are unobserved and can impact the outcome, therefore we choose to encode this uncertainty about the problem as probabilities. If you have no data, there is no way to answer the problem especially when probabilities are interpreted as relativ...
44,573
What is the probability a person sees a tree by looking out of the window?
This is what supervised learning does, particularly so-called “classification” models (most of which make probability predictions, but “classification” is all but a euphemism for predicting the probability of a discrete outcome). Consider a deck of cards. I draw a card and ask you to guess the card without showing it to y...
44,574
What is the probability a person sees a tree by looking out of the window?
When there are that many unknowns, generally you'd say "I don't know the probability". For example, your local book-maker will not give you odds on this event with the tree, and your local insurance agent will not sell you insurance against it. In order to produce a probability you could take one of at least two appro...
44,575
What is the probability a person sees a tree by looking out of the window?
Consider the probabilistic subject $$ \text{prob}(H | I) $$ i.e. "the probability that $H$, given that $I$". Here $H$ and $I$ are meaningful propositions, and $I$ must be not necessarily false. For some such pairs $(H, I)$, we cannot evaluate the subject. e.g. $$ \text{prob}(\text{dogs are risible} | \text{it will rain...
44,576
What is the probability a person sees a tree by looking out of the window?
There are many things to consider [...] And the list goes on which may contain infinite possibilities leading to seeing / not seeing a tree. If you want to consider even extreme cases like someone being delusional, then as you noticed, there is an infinite number of possibilities, so the answer is simple: the prob...
44,577
Why Spearman's rank correlation ranges from -1 to 1
See Wikipedia for the definition. Note that Spearman correlation is just the usual Pearson correlation, but calculated using the ranks of the data, not the data itself. So the reason it is always in the interval $[-1,1]$ is by the same proof as for the Pearson correlation. By using the Cauchy-Schwartz inequality.
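The claim is easy to verify numerically. This sketch (my own check, not from the answer) computes Pearson's r on the ranks of tie-free data and compares it with the classical Spearman formula $1 - 6\sum d_i^2 / (n(n^2-1))$; the two agree:

```python
def ranks(xs):
    # Rank 1..n of each value (assumes no ties)
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def pearson(x, y):
    # Plain Pearson correlation coefficient
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

x = [3, 1, 4, 1.5, 9, 2.6]
y = [2, 7, 1, 8, 0.5, 9]
rx, ry = ranks(x), ranks(y)
n = len(x)
rho_pearson = pearson(rx, ry)
rho_formula = 1 - 6 * sum((a - b) ** 2 for a, b in zip(rx, ry)) / (n * (n * n - 1))
print(rho_pearson, rho_formula)  # equal up to floating point
```

Since Spearman's rho is a Pearson correlation on ranks, the bound $-1 \le \rho \le 1$ follows immediately from the Cauchy-Schwarz argument for Pearson's r.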
44,578
Why Spearman's rank correlation ranges from -1 to 1
As you note, the sum includes $1^2+3^2+5^2+\cdots+(m-1)^2$ or $\sum_1^{m/2} (2r-1)^2$. For simplicity I'll look at the case when there are an even number of rows, i.e. when $m$ is even. You can expand the sum by expanding the brackets as $$\sum_1^{m/2} (2r-1)^2 = 4\sum r^2-4\sum r +\sum 1$$ and use standard formulae for the su...
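Carrying that expansion through the standard formulae gives the closed form $\sum_1^{m/2}(2r-1)^2 = m(m^2-1)/6$ for even $m$ (my own consolidation of the steps above, easy to spot-check numerically):

```python
# Spot-check: for even m, 1^2 + 3^2 + ... + (m-1)^2 == m*(m^2 - 1)/6
for m in range(2, 21, 2):
    lhs = sum((2 * r - 1) ** 2 for r in range(1, m // 2 + 1))
    rhs = m * (m * m - 1) // 6
    print(m, lhs, rhs)
```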
44,579
Are consistently negative Efron's pseudo-r2 in logistic regression possible?
Your problem is here: $\pi$ is an array of 1s and 0s, representing the predicted outcome labels as a result of the logistic regressions. That's incorrect. The $\pi$ values should be the predicted probabilities of class membership returned by logistic regression. See the explanation of the formula in the table on the ...
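To see why this matters, here is a small sketch of Efron's pseudo-R^2 on toy numbers of my own: fed predicted probabilities it behaves sensibly, while fed hard 0/1 labels (the mistake described above) it degenerates:

```python
def efron_r2(y, pi):
    # Efron's pseudo-R^2: 1 - sum (y_i - pi_i)^2 / sum (y_i - ybar)^2,
    # where pi_i must be predicted probabilities, not hard labels.
    ybar = sum(y) / len(y)
    ss_res = sum((yi - p) ** 2 for yi, p in zip(y, pi))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

y = [0, 0, 1, 1]
probs = [0.1, 0.6, 0.8, 0.9]          # predicted probabilities
labels = [round(p) for p in probs]     # thresholded labels (the mistake)

print(efron_r2(y, probs), efron_r2(y, labels))
```

With probabilities the near-miss on the second observation costs only 0.36 in squared error; with hard labels that one misclassification wipes out the entire explained sum of squares.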
44,580
Are consistently negative Efron's pseudo-r2 in logistic regression possible?
I think it’s important to remember what $R^2$ means in the linear case. $$ R^2=1-\left(\dfrac{ \overset{N}{\underset{i=1}{\sum}}\left( y_i-\hat y_i \right)^2 }{ \overset{N}{\underset{i=1}{\sum}}\left( y_i-\bar y \right)^2 }\right) $$ If we want to measure our ability to predict conditional means by how low of a square ...
44,581
Why is the z statistic of a binomial proportion test normally distributed?
Maybe the reason this "isn't obvious" to you is that it's not exactly true. If $n$ is large and $p$ is not too far from $1/2,$ and $X\sim\mathsf{Binom}(n, p),$ then $X$ is approximately $\mathsf{Norm}(np, \sqrt{np(1-p)})$ and $\hat p = X/n$ is approximately $\mathsf{Norm}(p, \sqrt{p(1-p)/n}).$ This follows from the Centr...
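A quick simulation sketch of the approximation ($n$ and $p$ chosen arbitrarily): the standardised proportion should have mean near 0, standard deviation near 1, and roughly 95% of draws inside $\pm 1.96$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 500, 0.4
x = rng.binomial(n, p, size=100_000)        # X ~ Binom(n, p), many replicates
p_hat = x / n
z = (p_hat - p) / np.sqrt(p * (1 - p) / n)  # standardised proportion

# If the normal approximation holds: mean ~ 0, sd ~ 1, ~95% inside +/-1.96
print(z.mean(), z.std(), np.mean(np.abs(z) < 1.96))
```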
44,582
Why is the z statistic of a binomial proportion test normally distributed?
The statistic should be $z = \frac{\hat p -p}{\sqrt{ p q / n}}$, which follows a scaled binomial distribution. This approaches the normal distribution. But also, the expression $z = \frac{\hat p -p}{\sqrt{\hat p \hat q / n}}$ will be approximately equal to $\frac{\hat p -p}{\sqrt{ p q / n}}$; it is the first term i...
44,583
Unbiased estimator of $ 1 + \mu^{2}$ from a Normal population
Any function of the data is called an estimator. There is no such thing as "THE" estimator of a quantity. Various estimators can have different properties. You have shown (correctly) that your estimator $\tfrac{1}{n}\sum x_i^2$ is unbiased for $1+\mu^2$. You could consider other estimators and they may have different p...
44,584
Unbiased estimator of $ 1 + \mu^{2}$ from a Normal population
You can use the properties of a non-central chi squared distribution to construct a non-biased estimator from the sum $$S_1 = \sum_{i=1}^n x_i^2$$ This sum $S_1$ has the mean $n+n\mu^2$. So $S_1/n$ will have the mean $1+\mu^2$, and is indeed an unbiased estimator. A more efficient estimator (an estimator with lower vari...
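A simulation sketch (assuming $\sigma = 1$, as in the question, with $\mu = 2$ picked arbitrarily) confirming that $S_1/n$ is centred on $1+\mu^2$:

```python
import numpy as np

rng = np.random.default_rng(1)
mu, n, reps = 2.0, 10, 200_000
x = rng.normal(mu, 1.0, size=(reps, n))  # X_i ~ N(mu, 1), reps independent samples
est = (x ** 2).mean(axis=1)              # S_1 / n for each replicate

print(est.mean())   # close to 1 + mu^2 = 5
```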
44,585
Best loss function for nonlinear regression
There's no such thing as a loss function "for" a particular kind of model. You could be using nonlinear regression with different loss functions. There are many loss functions and you can even construct one yourself. The choice depends on the nature of your problem and the data you are dealing with. Recall that minim...
44,586
Best loss function for nonlinear regression
Other answers (like bdeonovic's and Tim's) discuss "robustness to outliers". I have to admit that while this point of view is extremely common, I do not like it very much. I find it more helpful to think in terms of which conditional fit (or prediction) we want. Use the squared errors if you want conditional expectati...
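The correspondence between the loss and the conditional summary it targets can be illustrated with a crude grid search over constant predictions, on skewed toy data where the mean and median differ:

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.exponential(scale=1.0, size=10_000)   # skewed: mean != median

grid = np.linspace(0.0, 3.0, 3001)            # candidate constant fits
sq_best = grid[np.argmin([np.mean((y - c) ** 2) for c in grid])]
ab_best = grid[np.argmin([np.mean(np.abs(y - c)) for c in grid])]

print(sq_best, y.mean())       # squared error -> the mean (~1.0)
print(ab_best, np.median(y))   # absolute error -> the median (~0.69)
```

Swapping the loss moves the fitted constant from one location summary to the other; the same logic carries over to conditional fits in regression.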
44,587
Best loss function for nonlinear regression
Most of the alternative loss functions are for making the regression more robust to outliers. I've seen all of the following in various software package implementations, but I haven't looked too hard into the literature comparing them least absolute deviation least median of squares least trimmed squares metric trimmi...
44,588
Convergence in $L_1$ counterexample
Let $X_n \sim Be(n^{-1})$, i.e. a Bernoulli random variable. Now consider $Y_n = \sqrt{n}X_n$. It is straightforward that $E(|Y_n-0|)= n^{-1/2}$. Hence $Y_n \overset{L_1}{\to} 0$. Since $E(|Y_n^2 - 0^2|) = 1$ you get that $Y_n^2 \not \overset{L_1}{\to} 0^2$ As a side note, this example shows that $L_1$ convergenc...
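The two expectations can be evaluated exactly in a couple of lines; the first shrinks like $n^{-1/2}$ while the second stays pinned at 1:

```python
import numpy as np

for n in [10, 100, 10_000]:
    p = 1.0 / n                  # P(X_n = 1)
    e_abs_y = np.sqrt(n) * p     # E|Y_n - 0|     = n^(-1/2) -> 0
    e_y_sq = n * p               # E|Y_n^2 - 0^2| = 1 for every n
    print(n, e_abs_y, e_y_sq)
```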
44,589
Convergence in $L_1$ counterexample
In the answer of the linked question over on Math.SE and a comment of this page, it is suggested to take $$ f_n = n^{-1} \mathbf 1_{[0,n]}$$ Actually this does not work, and this is because this example solves the converse problem ($L^2$ but not $L^1$), and further on a space that is not a probability space ($\mathbb R...
44,590
If the predicted value of machine learning method is E(y | x), why bother with different cost functions for y | x?
The mean minimizing the root mean square error is often not the practical situation It is well known that the mean E(Y |X) minimizes the Root Mean Square Error (RMSE). You are right, the theoretical mean $E(Y |X) $ minimizes the root mean square error of a prediction (independent of the distribution). So if minimizin...
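As one concrete instance of targeting something other than the conditional mean, the pinball (quantile) loss is minimised by a quantile rather than the mean. A grid-search sketch on toy data (a lognormal outcome, so the upper quantile sits far above the mean):

```python
import numpy as np

def pinball(y, c, tau):
    """Pinball (quantile) loss of the constant prediction c at level tau."""
    d = y - c
    return np.mean(np.maximum(tau * d, (tau - 1) * d))

rng = np.random.default_rng(0)
y = rng.lognormal(size=20_000)          # skewed outcome

grid = np.linspace(1.0, 5.0, 2001)      # candidate constant predictions
best = grid[np.argmin([pinball(y, c, 0.9) for c in grid])]

print(best, np.quantile(y, 0.9))        # the two agree closely
```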
44,591
If the predicted value of machine learning method is E(y | x), why bother with different cost functions for y | x?
Say we know that Y follows a distribution with density f. If that statement is true, you would not want to try different distributional assumptions. If it is not true, then you should consider modeling different assumptions because it can have a substantial impact on your results. why even bother with different cost...
If the predicted value of machine learning method is E(y | x), why bother with different cost functi
Say we know that Y follows a distribution with density f. If that statement is true, you would not want to try different distributional assumptions. If it is not true, then you should consider model
If the predicted value of machine learning method is E(y | x), why bother with different cost functions for y | x? Say we know that Y follows a distribution with density f. If that statement is true, you would not want to try different distributional assumptions. If it is not true, then you should consider modeling d...
If the predicted value of machine learning method is E(y | x), why bother with different cost functi Say we know that Y follows a distribution with density f. If that statement is true, you would not want to try different distributional assumptions. If it is not true, then you should consider model
44,592
If the predicted value of machine learning method is E(y | x), why bother with different cost functions for y | x?
The answer is simpler than it seems. Though the sample mean in the simplest case, or least squares estimates in the multivariable predictor case provide unbiased estimates of the long-run mean, these estimates can be wrong or highly inefficient. In the case of a simple mean, i.e., when there are no predictors X, if t...
44,593
What is exponential entropy?
I will begin with building intuitions for the discrete case and then discuss the continuous case. The discrete case First, consider exponential entropy for the special case of a discrete uniform distribution $U^N$ over $N$ outcomes, i.e. $U^N_i = \frac{1}{N}$. It's easy to show that exponential entropy is equal to the ...
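A small sketch of the "effective number of outcomes" reading of exponential entropy (sometimes called perplexity): it equals $N$ for a uniform distribution over $N$ outcomes, 1 for a degenerate one, and something in between otherwise.

```python
import numpy as np

def exp_entropy(p):
    """Exponential entropy exp(H) = exp(-sum p_i log p_i), natural log."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                          # 0 log 0 is taken as 0
    return np.exp(-np.sum(p * np.log(p)))

print(exp_entropy([0.25] * 4))            # uniform over 4 outcomes -> 4.0
print(exp_entropy([0.7, 0.1, 0.1, 0.1]))  # "effective" count between 1 and 4
print(exp_entropy([1.0, 0.0, 0.0, 0.0]))  # degenerate -> 1.0
```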
44,594
What is exponential entropy?
It's just my two cents, but I can think of an interpretation, following part of the development of the KL divergence and working from it: Let's consider the discrete case, with a probability distribution $p_1...p_n$. Its entropy is $S = -\sum _i p_i \log p_i$ (just the discrete form of what you posted). Now, let's say ...
44,595
What is exponential entropy?
Exponential entropy measures the extent of a distribution, and can be used to avoid the case of singularity when the weighted average entropy of some variables is zero, $\bar{H}(X) = 0$. Campbell, L. “Exponential Entropy as a Measure of Extent of a Distribution.” Z. Wahrscheinlichkeitstheorie verw., 5 (1966), pp. 217–...
44,596
What is exponential entropy?
Entropy can be used as a measure of diversity, as biodiversity in ecology, or of income inequality, ... see for instance How is the Herfindahl-Hirschman index different from entropy?. In ecology one is then interested in the effective number of species, and it turns out this is given as the exponential of entropy, see ...
44,597
Monotonic splines in Python [closed]
Hi, I do not know the specifics of your problem but you might find the following reference really interesting: Eilers, 2006 (especially paragraph 3). The idea presented in the reference is rather simple to implement (there should also be some Matlab code in the appendix). Anyway below you will find my own attempt :-) A bit ...
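Separately from the (truncated) Eilers-style attempt above, a lightweight alternative worth knowing: SciPy's shape-preserving PCHIP interpolant is guaranteed monotone whenever the input data are monotone. Note it interpolates rather than smooths, so it is not a substitute for the penalised fit, just a quick sketch:

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0.0, 0.8, 0.9, 2.0, 2.1, 3.0])  # monotone increasing data

spline = PchipInterpolator(x, y)  # shape-preserving: no overshoot between knots
xs = np.linspace(0.0, 5.0, 501)
ys = spline(xs)
print(np.all(np.diff(ys) >= 0))   # the interpolant is monotone everywhere
```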
44,598
Monotonic splines in Python [closed]
I don't know of a Python package that explicitly fits splines, but you should be able to achieve your goal with gradient boosting in the most recent version of scikit-learn (https://scikit-learn.org/stable/auto_examples/release_highlights/plot_release_highlights_0_23_0.html). Specifically, you can fit a generalized addi...
44,599
Bayesian inference and testable implications
There are only two "principled" ways you can get out of your posited model that operate within the framework of the Bayesian paradigm. One is to initially set a broader class of models, and give some non-zero prior probability for the alternative models in that class (i.e., have a prior probability less than one for ...
44,600
Bayesian inference and testable implications
Prior predictive and posterior predictive checks may be helpful here. In both cases you sample the predictions from the model (the "fake data"), in the first case from the prior, in the second case from the posterior distribution, and then compare the distributions of the fake data with the distribution of the observed...
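A bare-bones sketch of the mechanics of a posterior predictive check: skewed data, a deliberately mis-specified Normal model, a crude normal approximation standing in for the posterior of $\mu$ (just to keep the example self-contained), and sample skewness as the test statistic.

```python
import numpy as np

rng = np.random.default_rng(0)
y_obs = rng.exponential(scale=2.0, size=200)   # skewed "observed" data

# Posit a (mis-specified) Normal model; approximate posterior draws for mu
# centred on the sample mean -- a stand-in for real posterior sampling.
n, draws = len(y_obs), 2000
mu_post = y_obs.mean() + rng.normal(scale=y_obs.std() / np.sqrt(n), size=draws)
sigma = y_obs.std()

y_rep = rng.normal(mu_post[:, None], sigma, size=(draws, n))  # fake data sets

def skew(y):
    """Test statistic aimed at the model's weak spot: sample skewness."""
    z = (y - y.mean()) / y.std()
    return np.mean(z ** 3)

t_rep = np.array([skew(y) for y in y_rep])
ppp = np.mean(t_rep >= skew(y_obs))    # posterior predictive p-value
print(ppp)  # near 0: the observed skewness is extreme under the Normal model
```

An extreme posterior predictive p-value flags exactly the kind of testable implication the question asks about: the fitted model cannot reproduce this feature of the data.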