44,501
Probability that the square of a random integer ends in 1
I think the solution is simply this: only the last digit of a number matters for whether its square ends in 1. Thus, if you select a number at random, the last digit has only ten possibilities, and that's what makes your sample space: you only focus on the last digit. Every number has one of ten possible last digits, all with equal probability of appearing: $\Omega=\{0,1,2,...,9\}$. The favorable elements for your problem are 1 and 9, since those are the only digits whose squares end in 1; thus $P = 2/10 = 0.2$.
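A quick brute-force check of this argument (a Python sketch over the ten possible last digits):

```python
# Which last digits d in 0..9 produce a square ending in 1?
favorable = [d for d in range(10) if (d * d) % 10 == 1]
print(favorable)            # [1, 9]
print(len(favorable) / 10)  # 0.2
```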
44,502
Algorithm for minimization of sum of squares in regression packages
No, lm in R doesn't use gradient descent to fit linear models. Linear least squares has an explicit solution. If we ignore weights, and the possibility of multiple $y$'s, and just deal with "plain" multiple regression: $y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \ldots + \beta_p x_p + \epsilon$ $\quad\quad\,\,=X\beta+\epsilon$ Then attempting to find the argmin of the sum of squared errors leads to the least squares normal equations, $(X^TX)\hat\beta=X^Ty$ which have the algebraic solution $\hat\beta=(X^TX)^{-1}X^Ty$ -- but lm doesn't actually compute that. What most regression programs do instead (lm included) is to compute the QR decomposition of $X$, and then the normal equations become: $(R^TQ^TQR)\hat\beta=X^Ty$, but $Q^TQ=I$, so $(R^TR)\hat\beta=(R^T Q^T)y$ which can be recast as $R^T(R\hat\beta)=R^T (Q^Ty)$ And then (skimming over quite a few details*) the fact that $R$ is upper triangular is exploited to solve that system efficiently. * (including the use of pivots/permutation matrices, the big-R/little-R dichotomy, simplifying the above further before solving, and a bunch of other issues) If you search on QR decomposition least squares you should find sets of notes that lay out the full details (but you'll likely have to learn a number of things before it's all clear). A classic reference is Golub and Van Loan's Matrix Computations. While this is - more or less - the way most least squares regression code works these days, you may find some that use either the Cholesky decomposition of the $X^TX$ matrix or the singular value decomposition of $X$, or in a few cases some other algorithm. If you have fairly large problems, large enough to make typical decompositions prohibitive, other algorithms like gradient descent are more likely to be used (but not by lm).
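The agreement between the normal-equations solution and the QR route can be illustrated with a short sketch (in Python with NumPy for convenience, on simulated data; this is an illustration of the algebra, not lm's actual LINPACK/LAPACK code path, and it uses a generic solver on $R$ rather than a dedicated triangular solver):

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), rng.normal(size=(50, 2))])  # intercept + 2 predictors
beta_true = np.array([1.0, 2.0, -0.5])
y = X @ beta_true + rng.normal(scale=0.1, size=50)

# Textbook normal-equations solution (what lm does NOT do):
beta_ne = np.linalg.solve(X.T @ X, X.T @ y)

# QR route: X = QR, then solve R beta = Q^T y, exploiting that R is triangular.
Q, R = np.linalg.qr(X)
beta_qr = np.linalg.solve(R, Q.T @ y)

print(np.allclose(beta_ne, beta_qr))  # True: both solve the same least-squares problem
```

The QR route is preferred in practice because it avoids forming $X^TX$, whose condition number is the square of that of $X$.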
44,503
Is clustering (kmeans) appropriate for partitioning a one-dimensional array?
Clustering in one dimension has some special properties that on occasion have been exploited in customised methods. Often it seems neglected in textbook literature, which concentrates on more general problems. See (for example) the answer (not really the question!) to How can I group numerical data into naturally forming "brackets"? (e.g. income) That said, I am sceptical about your inclination to think that you have a clustering problem. Clustering will often be disappointing when the main characteristic of variation is that it is continuous; it is then being asked to find groups where none are well defined. In your case, given your graph I would worry greatly about the reproducibility of clusters. The estimated pdf in particular will vary greatly with kernel choices; delegating choice to e.g. automated cross-validation solves that problem only if you believe everything that goes into it. It seems that you want to make, or to guide, a decision, so perhaps that should be more central to your problem formulation.
44,504
Is clustering (kmeans) appropriate for partitioning a one-dimensional array?
Well, k-means certainly works on 1-dimensional data. But it doesn't exploit the properties of the data well, such as being sortable. There are specialized algorithms such as Jenks Natural Breaks optimization, for example. Kernel Density Estimation (KDE) works really well on 1-dimensional data, and by looking for minima in the density estimate, you can also segment your data set. In your case it seems to suggest there are actually 8 clusters; in contrast to k-means, you don't have to pick this number beforehand.
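A minimal sketch of the KDE-minima idea (Python with SciPy; simulated data with three well-separated groups rather than the 8 the question's plot suggests):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
# Hypothetical 1-d sample with three well-separated groups.
x = np.concatenate([rng.normal(0, 0.3, 100),
                    rng.normal(5, 0.3, 100),
                    rng.normal(10, 0.3, 100)])

kde = gaussian_kde(x)                      # default (Scott) bandwidth
grid = np.linspace(x.min(), x.max(), 1000)
dens = kde(grid)

# Interior local minima of the density become cut points between segments.
interior = (dens[1:-1] < dens[:-2]) & (dens[1:-1] < dens[2:])
cuts = grid[1:-1][interior]
labels = np.searchsorted(cuts, x)          # segment index for each observation
print(len(cuts) + 1)                       # number of segments found
```

Note the caveat in the previous answer: the cut points depend on the kernel bandwidth, so in less clear-cut data the number of minima is sensitive to that choice.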
44,505
How to partition the variance explained at group level and at individual level?
Yes, there is a consensus: you should use the variances, not the standard deviations, in computing the intra-class correlation (ICC). The two-level random-intercept-only model is $$ y_{ij} = \beta_0 + u_{0j} + e_{ij}, $$ where the random intercepts $u_{0j}$ have variance $\sigma^2_{u_0}$ and the residuals $e_{ij}$ have variance $\sigma^2_e$. Now, the correlation between two random variables $x$ and $y$ is defined as $$ corr = \frac{cov(x, y)}{\sqrt{var(x)var(y)}}. $$ So to find the formula for intra-class correlation, we use the correlation formula and let our two random variables be two observations drawn from the same $j$ group, $$ ICC = \frac{cov(\beta_0 + u_{0j} + e_{1j}, \beta_0 + u_{0j} + e_{2j})}{\sqrt{var(\beta_0 + u_{0j} + e_{1j})var(\beta_0 + u_{0j} + e_{2j})}}, $$ and if you simplify this using the definitions given above and the properties of variances/covariances, you end up with $$ ICC = \frac{\sigma^2_{u_0}}{\sigma^2_{u_0} + \sigma^2_e}. $$ So for the two-level random-intercept-only model, the intra-class correlation is given by the ratio of the random intercept variance to the total variance. If you were to use the square roots of these variances (i.e., the standard deviations), then it might still be a somewhat informative summary of how much variability we have at different levels of the model, but it could no longer be interpreted as an intra-class correlation coefficient. By the way, I looked up the page in Gelman & Hill (2007) that you mentioned (p. 448), and they clearly define the ICC in terms of variances, not standard deviations. So I think this whole question could be based on an accidental misreading of their chapter.
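The algebra above can be checked by simulation; a small Python sketch (arbitrary simulated variance components, not values from the question) where the theoretical ICC is $4/(4+1)=0.8$:

```python
import numpy as np

rng = np.random.default_rng(2)
sigma2_u, sigma2_e = 4.0, 1.0   # random-intercept and residual variances
J, n_per = 2000, 2               # many groups, two observations per group

u = rng.normal(0, np.sqrt(sigma2_u), J)                         # intercepts u_0j
y = u[:, None] + rng.normal(0, np.sqrt(sigma2_e), (J, n_per))   # beta_0 = 0

# Theoretical ICC from the variance components:
icc_theory = sigma2_u / (sigma2_u + sigma2_e)   # 0.8

# Empirical correlation between two observations drawn from the same group:
icc_hat = np.corrcoef(y[:, 0], y[:, 1])[0, 1]
print(icc_theory, round(icc_hat, 2))
```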
44,506
Is a count variable with a large, but finite, number of possible values categorical or continuous?
There is, as far as I know, no taxonomy of variables that captures all the contrasts that might be important for some theoretical or practical purpose, even for statistics alone. If such a taxonomy existed, it would probably be too complicated to be widely acceptable. It is best to focus on examples rather than give numerous definitions. Number of days is a counted variable. It qualifies as discrete rather than continuous, and it is possible that the discreteness is important, particularly if most values are small. Some statistical people might want to insist that only models that apply to discrete variables should be used for such a variable. At the same time, it is often the case that models and methods treat such a variable as approximately continuous. Population size is a yet more obvious example. Human populations can be in billions and many procedures effectively treat such variables as continuous, regardless of the familiar fact that people are individuals. In contrast, a variable such as temperature is in principle continuous, but as a matter of convention temperatures may only be reported to the nearest degree or tenth of a degree, so the number of possible values may be rather small in practice. This does not usually worry anyone; it would certainly be perverse to call such a variable categorical. There are some contexts in which the discreteness of reported temperature is important: in reading mercury thermometers by eye and guessing at the last digit, people show idiosyncratic preferences for or against certain digits of the ten possibilities 0 to 9. Also, what do we do with categories? Answer: we count them. We count males, females; unemployed, employed, retired, students; whatever. So, often we are modelling category counts. In short, discrete counts are a common kind of variable, as well as continuous and categorical variables.
44,507
Is a count variable with a large, but finite, number of possible values categorical or continuous?
I think that for your purposes the distinction between categorical, ordinal and scalar variables is more relevant, where a scalar variable may have either discrete or pseudo-continuous values, but the units in which they are measured have identical sizes or intervals. For example, very few people need to consider the number of quanta, atoms, photons, etc., as their numbers in everyday measurements are so vast. Really it comes down to what you consider reasonable for the purposes of your study: for example, I would regard a range of 1-10000 with intervals of one as continuous, and would probably even consider a range of 1-50 similarly, but not lesser ranges (the cut-off point is subjective and depends in part on the topic and purpose). What you are describing as categorical is more likely to still be scalar. Categorical variables have values that have no ordinal relationship, e.g. colours, sex, marital status. Ordinal values indicate the relative magnitudes of relationships or responses, such as in Likert scales, where responses such as very happy, happy, neutral, sad, very sad can be recorded and assigned values of 1-5, but there is no definite interval between each response. Scalar variables have units of fixed length, e.g. numbers of items, feet, centimetres, nanometres, etc., and may or may not be considered continuous or discrete, depending on your viewpoint as explained earlier.
44,508
Should coin flips be modeled as Bernoulli or binomial draws in RJags?
Both models will give the exact same results. Why? The Likelihood principle. RJags is an R package that uses the software JAGS to conduct Bayesian inference, and any fully Bayesian procedure, one where inference proceeds from the posterior distribution, will satisfy the Likelihood principle. Essentially, the Likelihood principle states that if two likelihood functions are proportional to each other, then the same inference about the parameters should be obtained from the two likelihood functions. In your example we are inferring the probability of a coin landing heads up, $p$, from $n$ independent tosses, $X_1,...,X_n$, of that coin. Prior to tossing the coin, you assume that any value of $p$ in the interval $[0,1]$ is equally likely. Thus the prior distribution for the parameter $p$ is $\pi (p)=1$. Suppose we observe $k$ coin tosses where the coin lands heads up, where $0\leq k\leq n$. In the case of the model using the binomial distribution, the likelihood function is $$ l(p|X_1,...,X_n)= {n \choose k}p^k (1-p)^{n-k} $$ For the Bernoulli model, the likelihood function is $$ l_\star (p|X_1,...,X_n)=p^k(1-p)^{n-k} $$ We have observed the data, so both $n$ and $k$ are fixed values and therefore ${n \choose k}$ is just a constant, and $l(p|X_1,...,X_n) \propto l_\star(p|X_1,...,X_n)$, bearing in mind $l$ and $l_\star$ are functions of $p$. Once we have our samples from the posterior distribution from RJags, we will make the same conclusion, aside from any error due to having a finite sample from a Markov Chain that has hopefully converged. Also, if you are familiar with sufficient statistics, you could note that $k=\sum_{i=1}^n{X_i}$ is a sufficient statistic for $p$ in both models (assuming $n$ fixed).
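A numerical check that the binomial constant cancels after normalization (a Python sketch with an arbitrary example of $n=20$, $k=7$, approximating the flat-prior posterior on a grid):

```python
import numpy as np
from scipy.special import comb

n, k = 20, 7
p = np.linspace(0, 1, 10001)

# Binomial likelihood includes the constant C(n, k); the Bernoulli product omits it.
lik_binom = comb(n, k) * p**k * (1 - p)**(n - k)
lik_bern = p**k * (1 - p)**(n - k)

# With a flat prior, the posterior is the likelihood normalized over the grid:
post_binom = lik_binom / lik_binom.sum()
post_bern = lik_bern / lik_bern.sum()

print(np.allclose(post_binom, post_bern))  # True: the constant cancels
```

Analytically, both normalizations give the same Beta$(k+1,\, n-k+1)$ posterior, which is why the RJags results agree up to Monte Carlo error.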
44,509
Should coin flips be modeled as Bernoulli or binomial draws in RJags?
One draw from the binomial distribution is generally enough, but it depends on the data you have. If you only have the total number of heads seen across the individual coin flips, then the binomial distribution is enough; there is no need for a detailed model with $N$ Bernoulli flips. However, if you have the results of the individual coin flips and you need to distinguish them (e.g. because of covariates for individual coin flips), you will need the more detailed model with the Bernoulli distribution.
44,510
How do I validate my multiple linear regression model?
Note that the predicted residual sum of squares, PRESS, is got by jack-knifing the sample: there's no sense in calculating it for training & test sets. Calculate it for a model fitted to the whole sample (& compare it to the RSS to assess the amount of over-fitting). For ordinary least-squares regression there's an analytic solution: $$\sum_i \left(\frac{e_i}{1-h_{ii}}\right)^2$$ where $e_i$ is the $i$th residual & $h_{ii}$ its leverage—from the diagonal of the hat matrix $$H=X(X^\mathrm{T}X)^{-1}X^\mathrm{T}$$ (where $X$ is the design matrix). In general cross-validation & bootstrap validation are preferable to splitting a sample into training & test sets: you don't lose precision in the estimates as when fitting on a smaller training set, & the performance measure on the test set will be less variable. How preferable depends on sample size.
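A sketch (in Python with NumPy, simulated data) checking the analytic PRESS formula above against an explicit leave-one-out refit:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 40
X = np.column_stack([np.ones(n), rng.normal(size=(n, 3))])  # design matrix
y = X @ np.array([1.0, 0.5, -1.0, 2.0]) + rng.normal(size=n)

H = X @ np.linalg.solve(X.T @ X, X.T)       # hat matrix H = X (X'X)^-1 X'
e = y - H @ y                                # ordinary residuals
press = np.sum((e / (1 - np.diag(H))) ** 2)  # analytic leave-one-out formula

# Brute-force check: refit n times, each time leaving one observation out.
press_loo = 0.0
for i in range(n):
    mask = np.arange(n) != i
    b = np.linalg.lstsq(X[mask], y[mask], rcond=None)[0]
    press_loo += (y[i] - X[i] @ b) ** 2

print(np.isclose(press, press_loo))  # True
```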
44,511
How do I validate my multiple linear regression model?
You may use the Root Mean Squared Error (RMSE), which is a measure of accuracy between two sets of values. Use your model of type $Y = \beta_0 + \beta_1X_1 + \beta_2X_2 + \dots + \beta_nX_n$, calibrated on your 80% dataset, on the independent variables (IVs) of the other 20% (the validation dataset). In R, use the rmse function from the hydroGOF package. Example:
# create an object with dependent variable (DV) values from the validation dataset
dv_observed = c(1,2,3,4,5,6,7,8,9,10)
# use the multiple linear regression model (derived from the calibration dataset)
# to predict DV values from the validation dataset IV values, then create another object
dv_predicted = c(1,3,3,4,5,6,6,8,9,10)
require(hydroGOF)
rmse(dv_observed, dv_predicted)
[1] 0.4472136
The RMSE is in the same measurement unit as your data (e.g. if the DV is weight in pounds, the RMSE will be in pounds too).
44,512
Understanding the definition of omnibus tests
I was wondering what it means by a variance being explained or unexplained?

In the context of ANOVA it means the variance "explained" by group membership and the variance that remains unexplained. To understand this in detail you have to really look at the equations. I'll try to explain it anyway without introducing too many equations. In the case of a one-way ANOVA, each observed value can be thought of as being composed of three sources of variance: the grand mean, the group mean's deviation from the grand mean, and error:
$$x = \bar{\bar{x}} + (\bar{x}_k - \bar{\bar{x}}) + e.$$
If you assume there are no group differences then all $\bar{x}_k = \bar{\bar{x}}$; therefore, by estimating $\bar{x}_k$ you have 'explained' very little or no extra variance. Imagine instead that the null hypothesis is false and you go ahead and estimate the deviations of the group means from the grand mean. If you then adjust each score by the deviation of the group to which it belongs and recalculate the variance of the scores, you will find that the variance is smaller than it was before. That reduction in variance is the variance that you 'explained' by estimating means for each group.

How shall I understand what an omnibus test is?

Tests are referred to as omnibus if, after rejecting the null hypothesis, you do not know where the differences assessed by the statistical test are. In the case of F tests, a test is omnibus when there is more than one df in the numerator (3 or more groups). In the case of Chi-square tests, a test with more than one df is omnibus.

What is a test that is not omnibus?

Comparisons between two groups, such as those that happen in the cases detailed above: an F test with 1 df in the numerator, or a Chi-square test with 1 df.
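A small numerical sketch of that 'reduction in variance' (simulated data and names of my own): adjusting each score by its group mean's deviation from the grand mean shrinks the sum of squares, and the drop is exactly the between-group ('explained') part.

```python
import numpy as np

rng = np.random.default_rng(1)
groups = [rng.normal(loc=m, scale=1.0, size=50) for m in (0.0, 1.0, 2.0)]
x = np.concatenate(groups)
grand = x.mean()

total_ss = np.sum((x - grand) ** 2)
# adjust each score by its group's deviation from the grand mean, then recompute
adjusted = np.concatenate([g - (g.mean() - grand) for g in groups])
within_ss = np.sum((adjusted - grand) ** 2)

between_ss = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
assert np.isclose(total_ss, within_ss + between_ss)  # the classic decomposition
assert within_ss < total_ss                          # variance 'explained' by groups
```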
44,513
Understanding the definition of omnibus tests
I wouldn't look for a rigorous definition of omnibus test. It seems typically used for overall tests with wide scope, packing several tests into one. Other terms used with similar import are portmanteau statistic and factotum statistic. Over a century or more, there have been all sorts of fashions over terminology, including statisticians reaching for their Latin and Greek (ancillary, histogram, chi-square, heteroscedasticity), statisticians reaching for their thesaurus (as here), statisticians naming tests after their teachers or friends, ideally in pairs (Mann-Whitney, Kruskal-Wallis), and statisticians eager to show off their homespun sides (jackknife, bootstrap). Even the words that look familiar had to be invented (average, mode, regression).
44,514
How random are the results of the kmeans algorithm?
There is more than one k-means algorithm.

You probably refer to Lloyd's algorithm, which only depends on the initial cluster centers. But there also is MacQueen's, which depends on the sequence, i.e. the ordering of points. Then there are Hartigan and Wong's, Forgy's, ... And of course, various implementations may have implementation and optimization differences. They may treat ties differently, too! For example, many naive implementations will always assign elements to the first or last cluster when tied. Others will preserve the current cluster assignment. So when clustering integer values, where ties are much more common, but also on the Iris data set, you may see artifacts and differences caused by this.

Furthermore, the clusters may end up being reordered by memory address after finishing k-means, so you cannot safely assume that cluster 1 remains cluster 1 even if k-means converged after the first iteration. Other implementations will reorder clusters by cluster size (which actually makes sense for k-means, as this makes it more likely to return the same result on different random initializations).

But assuming that all iterate Lloyd until convergence (which the original MacQueen k-means didn't!), they should all at least arrive at a local optimum. There will be only oh-so-many local optima... Consider for example the data set generated by $p_j=(\sin(2\pi \frac{j}{n}), \cos(2\pi \frac{j}{n}))$, and let $n$ be divisible by $k$. There will be a lot of locally optimal solutions. Running k-means with different random seeds will indeed give you very different solutions. For appropriate parameters, I believe the chance of two different elements that were in the same cluster being in the same cluster again in another result will be somewhere around $50\%$.

In higher dimensionality, you can probably further reduce this number. For example, in the $n$-dimensional data set where $p_{jj}=1$ and $p_{ij}=0$ for $i\neq j$, all points are equidistant. It's easy to see that this will cause havoc for k-means...
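The last configuration is easy to check numerically (a sketch, not from the original answer): with $p_{jj}=1$ and $p_{ij}=0$ the points are the standard basis vectors, so every pair sits at distance $\sqrt{2}$ and no split is preferred over any other.

```python
import numpy as np

n = 6
P = np.eye(n)                          # row j is the point with p_jj = 1, p_ij = 0
diffs = P[:, None, :] - P[None, :, :]  # all pairwise difference vectors
dist = np.linalg.norm(diffs, axis=-1)
off_diag = dist[~np.eye(n, dtype=bool)]
assert np.allclose(off_diag, np.sqrt(2))  # all pairs equidistant
```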
44,515
How random are the results of the kmeans algorithm?
K-means is only randomized in its starting centers. Once the initial candidate centers are determined, it is deterministic after that point. Depending on your implementation of k-means, the centers can be chosen the same each time, similar each time, or completely at random each time. With MATLAB/R implementations, the choice is random, but the result you get is the best run from among 50 or so sets of choices for initial centers. Note that with the R stats::kmeans function, the default is to only run one set of initial centers (i.e., nstart = 1). Depending on your data, increasing this value may stabilize cluster assignments across runs, and doing so is generally recommended.

To answer your first question, it really depends on what kind of data you have. If it is nicely split into spherical-shaped clusters then you will typically get very, very similar clusters. If not, then you might get pretty random clusters each time. There is no general measure for the "likelihood" of being in the same cluster, but if you need one you can come up with one based on any instance's similarity/distance to the others compared to their similarity/distance to other points. Or perhaps you could run a linkage (single or complete) algorithm first and then weigh their "likelihood" of being in the same cluster by their distances to the lowest common ancestor. Or there are a number of other ways you could do it, depending on what your data looks like and what the application is.
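A minimal Lloyd iteration (my own sketch, not any particular package's implementation) makes the first point concrete: given the same starting centers, the algorithm is fully deterministic.

```python
import numpy as np

def lloyd(X, centers, iters=100):
    # plain Lloyd's algorithm: assign points to the nearest center, recompute means
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1)
        labels = d.argmin(axis=1)
        new = np.array([X[labels == k].mean(axis=0) if np.any(labels == k)
                        else centers[k] for k in range(len(centers))])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (40, 2)), rng.normal(5, 1, (40, 2))])
init = X[rng.choice(len(X), size=2, replace=False)]  # random starting centers

labels1, c1 = lloyd(X, init.copy())
labels2, c2 = lloyd(X, init.copy())   # same starting centers -> identical run
assert np.array_equal(labels1, labels2) and np.allclose(c1, c2)
```

Only the choice of `init` is random; everything downstream is a deterministic function of it.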
44,516
Logistic regression: categorical predictor vs. quantitative predictor
That is not a necessary result, but it is certainly plausible. If you turn a quantitative predictor into a single categorical predictor you lose a lot of information; with the categorical predictor you only know whether an observation is below or above a certain threshold (e.g. the mean or median), while with a quantitative predictor you also know how much below or above the threshold that observation is. It is not unreasonable to suspect that if you feed your model more information (i.e. add your variable as a quantitative predictor), you will get more precise results.

One of the reasons why this is not necessarily true is that if you add a variable to a regression model as a quantitative variable, you assume the effect of that variable to be linear. If the effect is strongly non-linear, then that may undo the advantage of adding quantitative variables. There are, however, easy ways to check whether that is the case (plots of residuals against predictors), and easy ways to solve it (adding your variables as splines or polynomials are probably the easiest solutions).
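A sketch of the first point (simulated data; the bare-bones Newton fit below is my own stand-in for glm, not an established library call): when the true effect really is linear, the continuous coding attains a higher maximized log-likelihood than a median split of the same variable.

```python
import numpy as np

def logit_fit(X, y, iters=25):
    # Newton-Raphson for logistic regression; returns the maximized log-likelihood
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1 - p)
        beta += np.linalg.solve(X.T @ (X * W[:, None]), X.T @ (y - p))
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    return float(np.sum(y * np.log(p) + (1 - y) * np.log(1 - p)))

rng = np.random.default_rng(2)
n = 2000
x = rng.normal(size=n)
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-x))).astype(float)  # linear logit in x

X_cont = np.column_stack([np.ones(n), x])                 # quantitative predictor
X_bin = np.column_stack([np.ones(n), x > np.median(x)])   # above/below threshold only

assert logit_fit(X_cont, y) > logit_fit(X_bin, y)
```

The binary coding throws away the "how much above or below" information, and the fit reflects it.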
44,517
Logistic regression: categorical predictor vs. quantitative predictor
It depends what you mean by "the same variable except it is continuous". Binning a truly continuous variable into two or more categories loses information, as described by @Maarten. If you're comparing analyses treating the predictor values, say $\{1,2,3,4,5,6,7,8,9,10\}$, as either continuous or categorical, in the latter case you fit nine parameters & the resulting drop in residual degrees of freedom can make the regression insignificant, especially in smaller data-sets.
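A numerical sketch of the degrees-of-freedom cost (simulated data; the setup is my own): with a genuinely linear effect over the values 1 to 10, the one-df linear fit yields a much larger F statistic than spending nine df on level indicators.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50
x = rng.integers(1, 11, size=n).astype(float)   # predictor taking values 1..10
y = 0.5 * x + rng.normal(size=n)                # truly linear effect

def rss(X):
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    return float(np.sum((y - X @ b) ** 2))

tss = float(np.sum((y - y.mean()) ** 2))
X_lin = np.column_stack([np.ones(n), x])                 # 1 slope parameter
X_cat = (x[:, None] == np.arange(1, 11)).astype(float)   # 10 separate level means

F_lin = ((tss - rss(X_lin)) / 1) / (rss(X_lin) / (n - 2))
F_cat = ((tss - rss(X_cat)) / 9) / (rss(X_cat) / (n - 10))
assert F_lin > F_cat   # nine numerator df dilute the evidence
```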
44,518
Logistic regression: categorical predictor vs. quantitative predictor
As @MaartenBuis wrote, you lose a lot of information by categorizing. Lagakos wrote an excellent article a while ago about the loss of power when mismodeling explanatory variables. In table IV you can see how much information you lose by discretizing by different schemas. You may also want to have a look at Frank Harrell's list on the categorization subject.

While the residual plot against the continuous predictor is a simple approach to checking the linearity assumption, I find that the ANOVA is really convenient here. The rms package in R allows you to effortlessly test the linearity, straight from the man page for the lrm() function:

# Fit a logistic model containing predictors age, blood.pressure, sex
# and cholesterol, with age fitted with a smooth 5-knot restricted cubic
# spline function and a different shape of the age relationship for males
# and females. As an intermediate step, predict mean cholesterol from
# age using a proportional odds ordinal logistic model
#
n <- 1000    # define sample size
set.seed(17) # so can reproduce the results
age <- rnorm(n, 50, 10)
blood.pressure <- rnorm(n, 120, 15)
cholesterol <- rnorm(n, 200, 25)
sex <- factor(sample(c('female','male'), n, TRUE))
label(age) <- 'Age'   # label is in Hmisc
label(cholesterol) <- 'Total Cholesterol'
label(blood.pressure) <- 'Systolic Blood Pressure'
label(sex) <- 'Sex'
units(cholesterol) <- 'mg/dl'   # uses units.default in Hmisc
units(blood.pressure) <- 'mmHg'

# Specify population model for log odds that Y=1
L <- .4*(sex=='male') + .045*(age-50) +
     (log(cholesterol - 10)-5.2)*(-2*(sex=='female') + 2*(sex=='male'))
# Simulate binary y to have Prob(y=1) = 1/[1+exp(-L)]
y <- ifelse(runif(n) < plogis(L), 1, 0)
cholesterol[1:3] <- NA   # 3 missings, at random

ddist <- datadist(age, blood.pressure, cholesterol, sex)
options(datadist='ddist')

fit <- lrm(y ~ blood.pressure + sex * (age + rcs(cholesterol,4)),
           x=TRUE, y=TRUE)
# x=TRUE, y=TRUE allows use of resid(), which.influence below
# could define d <- datadist(fit) after lrm(), but data distribution
# summary would not be stored with fit, so later uses of Predict
# or summary.rms would require access to the original dataset or
# d or specifying all variable values to summary, Predict, nomogram

anova(fit)

gives you the ANOVA output:

                Wald Statistics          Response: y

 Factor                                            Chi-Square d.f. P
 blood.pressure                                      0.23       1  0.6315
 sex  (Factor+Higher Order Factors)                 38.17       5  <.0001
  All Interactions                                  26.25       4  <.0001
 age  (Factor+Higher Order Factors)                 30.48       2  <.0001
  All Interactions                                   3.68       1  0.0552
 cholesterol  (Factor+Higher Order Factors)         24.15       6  0.0005
  All Interactions                                  22.74       3  <.0001
  Nonlinear (Factor+Higher Order Factors)            5.11       4  0.2759
 sex * age  (Factor+Higher Order Factors)            3.68       1  0.0552
 sex * cholesterol  (Factor+Higher Order Factors)   22.74       3  <.0001
  Nonlinear                                          4.54       2  0.1031
  Nonlinear Interaction : f(A,B) vs. AB              4.54       2  0.1031
 TOTAL NONLINEAR                                     5.11       4  0.2759
 TOTAL INTERACTION                                  26.25       4  <.0001
 TOTAL NONLINEAR + INTERACTION                      26.98       6  0.0001
 TOTAL                                              62.10      10  <.0001

As you see, there is no strong support for non-linearity in this example. I find it surprisingly easy to test very complicated models in this way. Hope this helps.
44,519
Is it feasible to use global optimization methods to train deep learning models?
In general, gradient-based techniques for optimizing neural networks are more specialized and better tuned to the task than the two generic optimization algorithms you mention, which don't require a gradient. Geoff Hinton mentioned evolution-based approaches to optimizing neural networks in his slides on deep learning. He says that they don't really work, and they scale poorly to networks that have many weights. Using the gradient helps immensely in making training efficient. Successful approaches to training deep neural networks have gone in the direction of approximating the second derivative of the objective function. I am very skeptical that general optimization procedures that don't know about the structure of the neural nets are going to have much success.
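A toy comparison (my own sketch, not Hinton's) of why the gradient matters as the number of weights grows: on a simple quadratic loss in 50 dimensions, 100 gradient steps get essentially to the optimum, while 100 gradient-free random-perturbation trials barely move.

```python
import numpy as np

rng = np.random.default_rng(4)
d = 50
loss = lambda w: float(np.sum(w ** 2))   # toy loss with known gradient 2w

# gradient descent: 100 steps
w_gd = rng.normal(size=d)
for _ in range(100):
    w_gd -= 0.1 * (2 * w_gd)

# gradient-free random search with the same budget of 100 evaluations
w_rs = rng.normal(size=d)
best = loss(w_rs)
for _ in range(100):
    cand = w_rs + rng.normal(scale=0.1, size=d)
    if loss(cand) < best:
        w_rs, best = cand, loss(cand)

assert loss(w_gd) < best   # the gradient user wins decisively
```

Each random probe must guess a good direction in 50 dimensions; the gradient hands that direction over for free, and the gap widens with dimensionality.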
44,520
Is it feasible to use global optimization methods to train deep learning models?
Probably it's a less-researched subject at the moment, as multipoint search algorithms usually require more processing power than using the gradient. Multipoint search algorithms do converge to a better optimum, though. You can also use e.g. evolutionary algorithms in the following ways:

- optimize the number of layers, number of neurons and meta-parameters of the network, which is an open question at the moment and traditionally requires much human interaction.
- build a multipoint search algorithm where, besides crossover and mutation, a new evolutionary operator is one based on the gradient
- evolve a good starting point for the gradient search algorithm
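The second idea in the list above can be sketched like this (a toy objective and loop of my own, not an established algorithm): alongside ordinary mutation, each parent also produces one offspring via a gradient step.

```python
import numpy as np

rng = np.random.default_rng(5)
fitness = lambda w: float(np.sum(w ** 2))   # toy objective (lower is better)
grad = lambda w: 2 * w

pop = [rng.normal(size=10) for _ in range(8)]
start_best = min(fitness(w) for w in pop)

for _ in range(30):
    pop.sort(key=fitness)
    parents = pop[:4]                        # elitist selection
    children = []
    for p in parents:
        children.append(p + rng.normal(scale=0.1, size=10))  # classic mutation
        children.append(p - 0.1 * grad(p))                   # gradient-based operator
    pop = parents + children

final_best = min(fitness(w) for w in pop)
assert final_best < start_best   # the population improves every generation
```

The gradient operator guarantees at least one improving child per parent, so the population search inherits some of the gradient's efficiency while keeping its multipoint character.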
44,521
Scale parameters -- How do they work, why are they sometimes dropped?
A scale parameter merely establishes a unit of measurement, such as a foot, inch, angstrom, or parsec. Without the scale parameter, we still know the shape and location of the distribution but we cannot label the axes, except for showing where the origin is.

Here is a distribution (with the origin at its left) shown as a PDF. We may establish its scale by showing one (or more) values along the x-axis. Here is a picture of the same shape with three separate scales shown:

Because a PDF uses area to show probability, it is unnecessary to label the vertical axis: we know the total area must be $1$. For instance, if the scale is set so the ticks are at $1$ and $2$, then I can see that this shape--which is close to triangular--must have a height of about $1$ in order for its total area to be $1$. (In fact, its peak is at $0.812$--close enough.)

Suppose the numbers on this scale are feet and we re-express the values in inches. That does nothing other than relabel the x-axis: the ticks now are labeled $12$ and $24$ inches, respectively, as shown in the middle row of labels. Obviously relabeling does not change the shape (or the origin). The height must change, though: since numerically the base is now $12$ times greater, the height must shrink by the same factor. We deduce the maximum value equals $0.812/12$--but again, there's no need to show this, because we know the total area is unity. If we have to, we draw a y-axis and we merely relabel it.

In general, when the scale is $\sigma \gt 0$ times greater than the original, we label tick $1$ with $\sigma$, tick $2$ with $2\sigma$, and so on. The axis labels may change, but the shape is constant.

In many statistical problems we do not want our conclusions to depend on the units we use to express the measurements. In most cases the units are arbitrary and we don't want the conclusions to be arbitrary! The only exceptions occur when the units are unique; the best example is where the x-axis is a count (but such nice continuous PDFs do not arise in that situation: they are usually shown as bar graphs instead). Therefore, for many purposes we may ignore the scale throughout all calculations and freely introduce it back at the end. Although I'm not sure, I think this might address the concerns behind the question.

Let's end with some remarks about mathematical notation. If the equation of the PDF for the initial labeling (with ticks at $1$ and $2$) is $f(x)$, then the relabeling turns $x$ into $x \sigma$, as is evident from the second figure. The height at $y = x \sigma$ is found by first dividing by $\sigma$ to find the original expression for $x$ and then applying $f$--but don't forget to divide by $\sigma$ to keep the total area to $1$! Therefore, the same distribution using units of $\sigma$--that is, a "scale factor" of $\sigma$--has the PDF $$\frac{1}{\sigma} f(y/\sigma).$$

An excellent way to remember this--from the right point of view it's perfectly rigorous--is always to write your PDF explicitly as a product of a length and a height. The height is $f(x)$ and the length is the differential $dx$, so the proper way to write the PDF is $f(x)dx$. Now when we change $x$ to $x/\sigma$ the PDF becomes $$f(y) dy = f(x/\sigma) d(x/\sigma) = \frac{1}{\sigma} f(x/\sigma) dx,$$ exactly as it should. (Differentials follow the rules of differentiation: for any function $g$, $dg(x) = g'(x)dx$. That's how I determined that $d(x/\sigma) = (dx)/\sigma$.) In short, the $dx$ is a formal reminder to adjust the y-axis in order to keep the total area to unity.

For example, a Gamma$(3)$ distribution has a PDF proportional to $e^{-3x} x^2 dx$, by definition. (The first figure is a portrait of this distribution.) Often we do not need to know the constant of proportionality because it's there only to make sure the total area is unity. To change the scale to $\sigma$, the distribution would now take the form $$e^{-3x/\sigma} (x/\sigma)^2 d(x/\sigma) = \frac{1}{\sigma^3} x^2 e^{-3x/\sigma} dx.$$ The constant of proportionality goes along for the ride: it multiplies both expressions. In this fashion we can obtain a great deal of information while needing to remember very little: all that is required is knowledge of the basic form of the distribution for one particular nice scale.
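The scaling rule is easy to check numerically. As a sketch (my own example, not part of the original answer), take the Gamma$(3)$ density above with its normalizing constant $27/2$, rescale it via $\frac{1}{\sigma}f(x/\sigma)$, and confirm that the total area stays $1$ while the mean stretches by the factor $\sigma$:

```python
import numpy as np

# Density from the example: f(x) = (27/2) x^2 exp(-3x), a Gamma(3) with rate 3
def f(x):
    return 13.5 * x**2 * np.exp(-3.0 * x)

def f_scaled(x, sigma):
    # Same shape at scale sigma: (1/sigma) * f(x/sigma)
    return f(x / sigma) / sigma

dx = 0.001
x = np.arange(dx / 2, 60.0, dx)  # midpoint grid; the tail beyond 60 is negligible
for sigma in (1.0, 12.0):
    area = f_scaled(x, sigma).sum() * dx
    mean = (x * f_scaled(x, sigma)).sum() * dx
    print(f"sigma = {sigma}: area = {area:.4f}, mean = {mean:.4f}")
```

For both scales the area remains (numerically) $1$, while the mean goes from about $1$ to about $12$: the scale factor relabels the axis without changing the shape or the total probability.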
44,522
Scale parameters -- How do they work, why are they sometimes dropped?
I would say that the importance of proportionality and equality depends entirely on what you're trying to say about the distribution or data. Let's think about some standard properties in statistics that people are interested in:

Mean: Scale parameters alter the mean of most distributions, though many common distributions have separate location parameters that control the mean (e.g., the normal distribution). So then, I wouldn't rely on this property in general. It's pretty easy to think up distributions where the mean and variance are both tightly coupled and would both be changed by any scaling factor (e.g., uniform distribution, gamma distribution).

Variance: Almost by definition, if you change the scaling parameter, you are changing the variance. There are counter-examples to this, but they're basically edge cases (zero variance, infinite variance, undefined variance).

Shape: The overall shape of the distribution shouldn't change because you change a scaling parameter. You're mainly just stretching it or shrinking it with respect to the CDF.

So then, if you only care about the general shape or family of distributions, sometimes you don't care about the scaling parameter. For example, if you were trying to identify if data came from a normal distribution versus an exponential distribution, you might not care what their exact scaling parameters were. If you care at all about variances being equal, you care about the scaling parameter. This covers a greater range of cases than you'd think. Different variants of statistical tests need to be used for unequal variances than for equal variances. Or, in other words, even if you only care about testing the difference in means for normal distributions... scaling parameters are still important. In the general case, if you care about means, you care about scaling. However, for quite a few popular distributions the two are independent.

So, as others have stated, in general it depends on the distribution you're interested in. Sometimes nearly everything about the distribution (except its general shape) depends on a single parameter that controls the scaling. The standard exponential distribution is a good example of this.
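A quick simulated illustration of the three properties above (my own example; any positive scale factor would do): multiplying an exponential sample by 5 moves the mean and variance but leaves the standardized shape untouched.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=1.0, size=200_000)
y = 5.0 * x  # multiply the scale parameter by 5

def skewness(a):
    a = a - a.mean()
    return np.mean(a**3) / a.std()**3

print(y.mean() / x.mean())        # 5: the mean scales with the parameter
print(y.var() / x.var())          # 25: the variance scales with its square
print(skewness(x), skewness(y))   # identical (about 2): shape is untouched
```

The skewness is exactly invariant under rescaling because the factor of $5^3$ cancels between the third moment and the cubed standard deviation.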
44,523
Kaplan-Meier p-values
The p-value to which you are referring is the result of the log-rank test, or possibly the Wilcoxon. This test compares expected to observed failures at each failure time in both treatment and control arms. It is a test of the entire distribution of failure times, not just the median. The null hypothesis for the log-rank test for censored survival data is that the time-averaged hazard ratio for failure comparing treatment and control arms is 1.

It's worth mentioning that the power of this test isn't driven by the number of individuals at risk in the various treatment arms or strata, but by the number of failures observed. So, even if you have twofold sample size in one stratum, if many are censored before a failure is observed, it's not bizarre to see greater power in the other stratum--even if the KM curves look identical--due to a larger number of failures.

If we reject the null hypothesis and find that subgroup2 has a significant difference in survival comparing treatment to control but subgroup1 does not have such a difference, then there is evidence of effect modification of treatment by subgroup. That suggests there is a difference in survival among those in subgroup2 but not subgroup1.

As a sensitivity analysis, it would be useful to display the Kaplan-Meier curves and possibly a smoothed estimate of the hazard ratio as a function of time. My guess is that, while the survival may be comparable at the median, it is the sequence of events in the first quartile of failure times that drives much of the inference, and you see a quick drop-off in survival in one of the treatment arms for subgroup 2.
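The mechanics described above--accumulating observed minus expected failures over the pooled risk set at each failure time--can be sketched in a few lines. This is a minimal hand-rolled illustration (not any package's implementation, and no stratification or tie corrections beyond the standard hypergeometric variance):

```python
import math
import numpy as np

def logrank(time_a, event_a, time_b, event_b):
    """Two-sample log-rank test (event = 1 means failure, 0 means censored).
    Compares observed vs expected failures in arm A over the pooled risk set
    at each distinct failure time."""
    time_a, event_a = np.asarray(time_a), np.asarray(event_a)
    time_b, event_b = np.asarray(time_b), np.asarray(event_b)
    fail_times = np.unique(np.concatenate([time_a[event_a == 1],
                                           time_b[event_b == 1]]))
    O = E = V = 0.0
    for t in fail_times:
        n_a, n_b = np.sum(time_a >= t), np.sum(time_b >= t)   # still at risk
        d_a = np.sum((time_a == t) & (event_a == 1))          # failures, arm A
        d_b = np.sum((time_b == t) & (event_b == 1))
        n, d = n_a + n_b, d_a + d_b
        O += d_a
        E += d * n_a / n
        if n > 1:  # hypergeometric variance of d_a
            V += d * (n_a / n) * (n_b / n) * (n - d) / (n - 1)
    chi2 = (O - E) ** 2 / V
    p = math.erfc(math.sqrt(chi2 / 2.0))  # chi-square (1 df) survival function
    return chi2, p

# well-separated failure times -> small p
print(logrank([1, 2, 3], [1, 1, 1], [10, 11, 12], [1, 1, 1]))
```

Note how censored observations contribute to the risk sets (`n_a`, `n_b`) but never to the observed counts: that is exactly why power tracks the number of failures rather than the number enrolled.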
44,524
Kaplan-Meier p-values
Here is a made-up example of two survival curves that have almost the same median survival (and the same five-year survival) but are very different. The log-rank test finds that the difference between the two curves is statistically significant, with P = 0.04. This simply points out the obvious: that two survival curves can have the same median survival but differ in other ways. Adapted from Figure 29.4 of Intuitive Biostatistics.
44,525
Multicollinearity in OLS
Re your 1st question: Collinearity does not make the estimators biased or inconsistent; it just makes them subject to the problems Greene lists (with @whuber's comments for clarification).

Re your 3rd question: High collinearity can exist with moderate correlations; e.g., if we have 9 iid variables and one that is the sum of the other 9, no pairwise correlation will be high but there is perfect collinearity. Collinearity is a property of sets of independent variables, not just pairs of them.
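The "sum of 9 iid variables" example is easy to verify numerically. A sketch (my own simulation, not from the original answer): no pairwise correlation exceeds about 1/3, yet the design matrix is rank-deficient.

```python
import numpy as np

rng = np.random.default_rng(1)
X9 = rng.normal(size=(1000, 9))        # 9 iid predictors
x10 = X9.sum(axis=1, keepdims=True)    # the 10th is their sum
X = np.hstack([X9, x10])

corr = np.corrcoef(X, rowvar=False)
off_diag = np.abs(corr[np.triu_indices(10, k=1)])
print("largest |pairwise correlation|:", off_diag.max())  # about 1/3, far from 1
print("rank of X:", np.linalg.matrix_rank(X))             # 9, not 10: perfect collinearity
```

Each of the 9 variables correlates with the sum at only $1/\sqrt{9} = 1/3$ in expectation, so no pairwise screen would flag a problem, yet $X^TX$ is singular.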
44,526
What am I supposed to do if Cronbach's alpha is negative?
You have only weak to very weak correlations (and sometimes negative ones) between your variables. Your alpha value is negative surely because the mean of all the inter-item correlations is negative. Maybe you can use a factor analysis to check the factorial structure and correlations between the extracted factors? But given the data you provide, I think it will not be very helpful, except maybe if you have a theory to guide your interpretation of the results. Do you have a theory or prior results predicting that your variables should correlate positively (i.e., allowing the use of Cronbach's alpha)? If so, then your results are pretty strange...
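You can see the mechanism directly from the standard formula $\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_i \sigma^2_i}{\sigma^2_{\text{total}}}\right)$: negative inter-item correlations shrink the variance of the total score below the sum of the item variances, which drives alpha negative. A toy two-item sketch (my own made-up data):

```python
import numpy as np

def cronbach_alpha(items):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the total)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    return k / (k - 1) * (1.0 - items.var(axis=0, ddof=1).sum()
                          / items.sum(axis=1).var(ddof=1))

rng = np.random.default_rng(0)
a = rng.normal(size=500)
opposed = np.column_stack([a, -a + 0.3 * rng.normal(size=500)])  # r < 0
aligned = np.column_stack([a, a + 0.3 * rng.normal(size=500)])   # r > 0
print(cronbach_alpha(opposed))  # strongly negative
print(cronbach_alpha(aligned))  # near 1
```

With the opposed pair the two items nearly cancel in the total score, so the total variance is tiny and the ratio inside the parentheses exceeds 1.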
44,527
What am I supposed to do if Cronbach's alpha is negative?
As @alric said, all your correlations are weak. I'd conclude that these questions are not a scale, should not be added together or combined in some other way, and are each really separate entities.
44,528
What am I supposed to do if Cronbach's alpha is negative?
This almost always means that you have some variables which should be reverse scored, and you have not reversed them. The R package psych contains a function alpha() which checks for reversal errors and fixes them.
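In Python terms, the check amounts to flipping any item that correlates negatively with the rest of the scale and recomputing alpha. The sketch below is only a rough stand-in for what `psych::alpha(check.keys = TRUE)` does in R--the function's exact reversal rules may differ--using made-up data with one forgotten reverse-scored item:

```python
import numpy as np

def cronbach(items):
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                          / items.sum(axis=1).var(ddof=1))

def flip_reversed(items):
    """Flip any item that correlates negatively with the sum of the others --
    an assumed approximation of psych::alpha's check.keys behaviour."""
    out = items.copy()
    for j in range(items.shape[1]):
        rest = np.delete(items, j, axis=1).sum(axis=1)
        if np.corrcoef(items[:, j], rest)[0, 1] < 0:
            out[:, j] = -items[:, j]
    return out

rng = np.random.default_rng(0)
latent = rng.normal(size=1000)
items = np.column_stack([latent, latent, latent, -latent]) + 0.5 * rng.normal(size=(1000, 4))
print(cronbach(items))                 # near zero: the reversed item wrecks alpha
print(cronbach(flip_reversed(items)))  # high once the item is flipped back
```

With the fourth item reversed, the scale total nearly cancels and alpha collapses; flipping it restores a healthy value.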
44,529
What am I supposed to do if Cronbach's alpha is negative?
My personal observation has been that when someone calculates alpha for a mixture of scales (dichotomous, polychotomous, Likert, etc.), the probability of alpha being negative or low is higher. So my conclusion--which may be personal or biased--is that consistent scales should be used when calculating Cronbach's alpha.
44,530
How to test group differences when neither parametric nor nonparametric assumptions are met?
What you write is a compilation of many common misconceptions about these tests. The short answer is: use the t-test with Welch correction. Now, the details.

"I would like to test whether mean (or median) answers in both groups are significantly different."

Means and medians are different things. What people usually do is think in terms of means, not medians, so by default this is also what you should aim for. The Likert scale was invented by Rensis Likert precisely with the intention of making it useful for computing means (not only medians). See James Carifio and Rocco Perla, "Resolving the 50-year debate around using and misusing Likert scales", in commentaries in Medical Education, 2008, Blackwell Publishing Ltd.

"The problem is that I can not use parametric (t-test) neither nonparametric (Mann-Whitney) test since data are nonnormal (thus t-test is not appropriate) (...), multimodal (...)"

Definitely not! Both tests are robust with respect to the shape of the distribution. Only the Mann-Whitney test (it is usually called the Wilcoxon-Mann-Whitney, or WMW, test) requires both distributions to have the same shape.

"(...) ordinal and thus not continuous (...)"

"Ordinal variable" means "arithmetic means don't make sense on it"--like education measured on a 3-level scale: "1 - grammar school", "2 - college" and "3 - university". This does not imply it is not continuous (although usually that is the case). Neither the WMW test nor the t-test requires continuous variables.

"(...) with different variances (...)"

When you use the t-test with Welch's correction (Welch B.L., "The generalization of Student's problem when several different population variances are involved", Biometrika, 34, 28-38, 1938), then you don't need to worry about the unequal variances (and shapes). The t-test (like any test based on means) is already very robust with respect to departures from normality (see e.g. Michael R. Chernick and Robert H. Friis, "Introductory Biostatistics for the Health Sciences", Wiley Interscience, 2003, and many other books). This property comes from the fact that it is based on means. By virtue of the Central Limit Theorem, the distribution of the mean very quickly converges to the normal distribution.

"(...) and with different shapes (thus Mann-Whitney test is not appropriate)."

Yes, you've got it right. Technically speaking, the Mann-Whitney U test does not test for the median, but whether one distribution is offset from the other, which is something subtly different. In particular, it makes this test sensitive to differences in distribution between groups (see Morten W. Fagerland and Leiv Sandvik, "The Wilcoxon-Mann-Whitney test under scrutiny", John Wiley & Sons, 2009). These differences can translate e.g. into differences in variance or skewness. So this test, in contrast to the Welch test (the t-test with Welch modification), is not safe when there is no variance homogeneity.
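In most software the Welch correction is a single flag away. For instance, in Python's SciPy (the two 5-point Likert samples below are made up purely for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# two Likert-style groups with different shapes and variances
a = rng.choice([1, 2, 3, 4, 5], size=50, p=[0.05, 0.10, 0.20, 0.35, 0.30])
b = rng.choice([1, 2, 3, 4, 5], size=50, p=[0.30, 0.30, 0.20, 0.10, 0.10])

# equal_var=False applies the Welch correction to the two-sample t-test
t, p = stats.ttest_ind(a, b, equal_var=False)
print(f"Welch t = {t:.3f}, p = {p:.2g}")
```

(In R, `t.test(a, b)` uses the Welch correction by default.)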
44,531
How to test group differences when neither parametric nor nonparametric assumptions are met?
Perhaps you could use bootstrapping. You have 100 points. If there is not a significant difference, then these 100 points together are representative of the entire distribution of values. So, pool your samples, and draw (with replacement) two groups of 50 points from this sampling space. Measure the difference between the means and medians of these two groups of points. Repeat a few hundred times (which, since you have a computer, should take a few seconds at most!). Now, measure the difference in mean and median between your two original samples. Are 95% of the bootstrapped differences smaller (or larger) than this difference? Then your difference is significant at a 95% confidence level.
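The recipe translates almost line for line into code. This is a sketch of the procedure as described above (the add-one smoothing in the returned p-value and the simulated normal data are my own conventions, not part of the answer):

```python
import numpy as np

def pooled_resampling_pvalue(x, y, stat=np.mean, n_iter=2000, seed=0):
    """Pool both samples, redraw two groups with replacement, and count how
    often the resampled |difference in stat| reaches the observed one."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([x, y])
    observed = abs(stat(x) - stat(y))
    hits = 0
    for _ in range(n_iter):
        g1 = rng.choice(pooled, size=len(x), replace=True)
        g2 = rng.choice(pooled, size=len(y), replace=True)
        if abs(stat(g1) - stat(g2)) >= observed:
            hits += 1
    return (hits + 1) / (n_iter + 1)  # add-one to avoid a p-value of exactly 0

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, size=50)
y = rng.normal(2.0, 1.0, size=50)
print(pooled_resampling_pvalue(x, y))             # tiny: the means clearly differ
print(pooled_resampling_pvalue(x, y, np.median))  # the same idea works for medians
```

Passing a different `stat` function is what makes this attractive here: the same machinery tests means, medians, or any other summary without distributional assumptions.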
44,532
How to estimate missing data?
x <- 1:30
y <- c(rnorm(25) + 1:25, rep(NA, 5))    # generate data with NAs
df1 <- data.frame(x, y)                 # combine into a data frame
lmx <- lm(y ~ x, data = df1)            # create a model to predict from (NA rows are dropped)
ndf <- data.frame(x = 1:30)             # create data to predict to
df1$fit <- predict(lmx, newdata = ndf)  # get predictions
df1$y2 <- with(df1, ifelse(is.na(y), fit, y))
The last line creates a new variable in the data frame that keeps all of the observed values and fills in the missing ones with the fitted values from the regression.
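The same idea in Python, as a rough sketch (numpy only; the variable names are mine): fit on the observed rows, predict everywhere, and fill in only the gaps:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.arange(1, 31, dtype=float)
# 25 observed points on a noisy trend, 5 missing values at the end
y = np.concatenate([rng.normal(size=25) + np.arange(1, 26), np.full(5, np.nan)])

obs = ~np.isnan(y)
slope, intercept = np.polyfit(x[obs], y[obs], deg=1)  # fit on observed rows only
fit = intercept + slope * x                           # predictions for every x

y_filled = np.where(np.isnan(y), fit, y)              # keep observed values, fill the gaps
```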
44,533
How to estimate missing data?
It is often a good idea to consider the possible reasons for data being missing, i.e. missing completely at random, missing at random, or missing not at random. Depending on this, methods to estimate missing data may be biased. A sophisticated way to deal with data missing at random is multiple imputation, which acknowledges that there is uncertainty about the values of the missing quantities. This can be done in R using the mice package. Here is a reproducible example using the nhanes data that comes with the package:
library(mice)
imp <- mice(nhanes)
fit <- with(imp, lm(bmi ~ chl + hyp))
fit
summary(pool(fit))
complete(imp)     # returns the data with the first set of imputed values
complete(imp, 2)  # returns the 2nd set
44,534
How to estimate missing data?
Another approach would be to use a simulation-based solution such as Gibbs sampling, built on the statistics of past observations. I believe there is support for that in R: http://darrenjw.wordpress.com/2011/07/31/faster-gibbs-sampling-mcmc-from-within-r/
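As a toy illustration of what Gibbs sampling does (a generic sketch, unrelated to the linked post's code): alternately drawing each variable from its conditional distribution reproduces a correlated bivariate normal, and the same machinery is what draws plausible values for missing entries given the observed ones:

```python
import numpy as np

rng = np.random.default_rng(7)
rho = 0.8                     # target correlation of the bivariate normal
n = 20000
x = np.zeros(n)
y = np.zeros(n)
sd = np.sqrt(1 - rho**2)      # conditional standard deviation

for t in range(1, n):
    # Each full conditional of a standard bivariate normal is itself normal
    x[t] = rng.normal(rho * y[t - 1], sd)
    y[t] = rng.normal(rho * x[t], sd)

burn = 1000
r = np.corrcoef(x[burn:], y[burn:])[0, 1]   # should land close to rho
```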
44,535
What is the proper naming scheme for dataset parts?
It seems like in your setup, your inputs (the data that you're using to model) and your outputs (what you'd like to predict) are both in the same table. In that case it's a bit complicated, as: (1) a row is an input/output tuple (example; observation; data point; datum); (2) a single cell is either an input feature value (or attribute) or an output value; (3) input data, or training set; (4) outputs, or targets. Or mathematically, you'll often see: (1) $(\mathbf{x}_i, y_i)$; (2) either $x_{ij}$ or $y_i$, depending on which column you select; (3) $\mathbf{X}$; (4) $\mathbf{y}$. It's worth looking at the wiki page on cross-validation to see how to split a dataset up correctly.
44,536
What is the proper naming scheme for dataset parts?
Based on Andrew Ng's ml-class.org and Tom Mitchell's "Machine Learning" book, I think they will be called: (1) training example, (2) feature value, (3) training set, (4) output/target variable. But naming will depend on the algorithm, I believe. Say, if you use decision trees, then your training examples would become instances and your features would become attributes.
44,537
What is the proper naming scheme for dataset parts?
(1) data point, (2) feature value. I think that for regression: (3) regressors, explanatory variables, input variables, predictor variables; (4) regressand, endogenous variable, response variable, measured variable. For classification: (3) features, input features, input variables; (4) class.
44,538
What is the proper naming scheme for dataset parts?
Answering more generally, as I'm not sure if your datasets or textbooks are always going to be restricted to weather data, and not duplicating the answers above: (1) observations, or cases; (2) I always refer to this as a vector ij; (3) independent variables (normally in an experimental or quasi-experimental context only); (4) dependent variable. I feel for people in different disciplines. I wish we all referred to the same things with the same names.
44,539
How can I draw a boxplot without boxes in R?
The stripchart function in the graphics library seems to be what you want if you want to plot the data one-dimensionally for each group. It produces a somewhat basic plot, but you can customize it:
business <- runif(50, min = 65, max = 100)
law <- runif(50, min = 60, max = 95)
df <- data.frame(group = rep(c("Business", "Law"), each = 50),
                 value = c(business, law),
                 stringsAsFactors = FALSE)
stripchart(value ~ group, data = df, main = "Salary Example (dots)",
           pch = 16, col = c("red", "green"))
44,540
How can I draw a boxplot without boxes in R?
One interesting application of R's stripchart() is that you can use jittering or stacking when there is some overlap in data points (see method=). With lattice, the corresponding function is stripplot(), but it lacks the above method argument to separate coincident points (but see below for one way to achieve stacking). An alternative way of doing what you want is to use Cleveland's dotchart. Here are some variations around this idea using lattice:
my.df <- data.frame(x=sample(rnorm(100), 100, replace=TRUE),
                    g=factor(sample(letters[1:2], 100, replace=TRUE)))
library(lattice)
dotplot(x ~ g, data=my.df)               # g on the x-axis
dotplot(g ~ x, data=my.df, aspect="xy")  # g on the y-axis
## add some vertical jittering (use `factor=` to change its amount in both cases)
dotplot(g ~ x, data=my.df, jitter.y=TRUE)
stripplot(g ~ x, data=my.df, jitter.data=TRUE)
## use stacking (requires the `HH` package)
stripplot(g ~ x, data=my.df, panel=HH::panel.dotplot.tb, factor=.2)
## using a custom sunflowers panel, available through
## http://r.789695.n4.nabble.com/Grid-graphics-issues-tp797307p797307.html
stripplot(as.numeric(g) ~ x, data=my.df, panel=panel.sunflowerplot,
          col="black", seg.col="black", seg.lwd=1, size=.08)
## with overlapping data, it is also possible to use transparency
dotplot(g ~ x, data=my.df, aspect=1.5, alpha=.5, pch=19)
Some previews of the above commands:
44,541
How can I draw a boxplot without boxes in R?
I got a little curious about how the violin plot works when I saw this question. This also led me to the beanplot, which might be on the same theme. The base data creation for all three plots:
business <- runif(50, min = 65, max = 100)
law <- runif(50, min = 60, max = 95)
The violin plot
library(vioplot)
vioplot(business, law, names=c("Business", "Law"), horizontal=TRUE,
        col=c("lightblue"), rectCol=c("gold"))
Gives the plot below; different colors aren't possible without a tweak. For getting different colors I found this slightly more advanced solution from Ben Bolker:
plot(1, 1, ylim=c(0, 2.5), xlim=range(c(business, law)), type="n",
     xlab="", ylab="", axes=FALSE)
## left axis, with user-specified labels
axis(side=2, at=1:2, labels=c("Business", "Law"))
axis(side=1)
vioplot(business, at=1, col="blue", add=TRUE, horizontal=TRUE)
vioplot(law, at=2, col="gold", add=TRUE, horizontal=TRUE)
And it looks like this:
The beanplot
In my search I also stumbled across the beanplot from Peter Kampstra that seems interesting:
library(beanplot)
beanplot(business, law, horizontal=TRUE, names=c("Business", "Law"),
         col=c("blue", "gold"))
Gives this:
44,542
How to display magnitude of change over time between two series?
If you are interested in the changes as a fraction, then simply plot the logarithm of the values. A fixed distance in log space is a fixed fractional change, so if one line is steeper than the other it is changing more rapidly. The log scale may also allow you to conveniently get both sets of values onto one graph without having to normalize the values in any way.
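A quick numerical check of the claim that a fixed distance in log space is a fixed fractional change (illustrative numbers only):

```python
import numpy as np

# Two series growing by a fixed 5% per step, from very different starting levels
steps = np.arange(10)
small = 10.0 * 1.05 ** steps
large = 5000.0 * 1.05 ** steps

# On the log scale both series have identical, constant increments
d_small = np.diff(np.log(small))
d_large = np.diff(np.log(large))
```

Both increment arrays are constant at log(1.05), so on a log axis the two curves are parallel straight lines despite the 500x difference in level.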
44,543
How to display magnitude of change over time between two series?
You ask "is this correct?" and "is there a better way to do it?" but the answers to these questions depend on what exactly you are trying to do. A statistical graph is "wrong" only if it does things like distort the data; it is "bad" if it is hard to read, etc. Are you interested in the difference between the two stock prices? Then subtract one from the other and plot that. Are you interested in the ratio? Then divide the larger by the smaller and plot that. (Cleveland showed that it is easier to interpret a single line than the relationship between two lines; his example was imports and exports from some country (England, IIRC) over time). Do you need both series? Well, you could standardize (see earlier answers) or you could just multiply one series by some convenient number (be sure to state this!) - the latter may be easier for your audience to grasp. I highly recommend William Cleveland's books.
44,544
How to display magnitude of change over time between two series?
In the financial press, a common way to display two or more time series (such as GDP or - relevant to the original question - stock prices) in a way that allows changes over time to be compared, is rebasing. A base time is selected, and the values of the series are scaled so that they are all 100 there. If the first series is €40 in the base period, but €48 later, these become 100 and 120 (which indicates a 20% rise since the base period). If the second series was €500 in the base period and €450 later, these become 100 and 90 (showing a 10% fall). Here is an example in the Economist (in case that is paywalled, this is a link to the image itself).

Alternatively, just the percentage changes might be shown. So in my example, the first series would start at 0 and move up to 20, while the second series will start at 0 and move down to -10. Here is an example from the Financial Times (image link).

Usually the first time included in the graph, on the far left, is chosen as the base period. Occasionally we see plots rebased so the final value is 100, like this one (taken from this BBC article). I've also seen charts which have been rebased to a period in the middle of the graph. This might make sense if you were comparing GDP series for two countries before and after a financial crisis - to make the results comparable you might rebase them to the period with the peak pre-crisis GDP. Note that the faster-growing economy in the run-up to the crisis will have a steeper graph to the left of the crisis, but this means its graph will dip below the one it is being compared to. To someone who doesn't understand how to interpret the vertical scale of the graph, this might suggest it is the weaker economy prior to the crisis! This sort of confusion is avoided by rebasing to the left, but that is not always appropriate.

Rebasing has some advantages over just plotting the ratio of the two series. One is that it is possible to extend this concept to more than two series on the same graph (see this example from the BBC - taken from this article). But beware of the disadvantages of rebasing - the choice of the base period is important, because it makes the series arbitrarily cross there. Generally people rebase so that the graphs all start together at 100, and the series will appear to cross over again if they ever return to their "original" ratio. But unless there is a very good reason to start the graph of the series there - perhaps because the stock price graph begins at flotation, or a GDP graph begins at national independence - then the starting point doesn't really represent anything genuinely original or special. If you'd made a different choice about where in the data series you start the graph from, then features like subsequent cross-overs can look quite different.

This kind of arbitrary crossing-over is one reason that people are skeptical of charts with two separate y-axes for two time series. I would also second Peter Flom's answer, that it is easier to interpret one line than two, so if only the ratio of the two series is interesting, then only the ratio of the series need be plotted!
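Rebasing is just scaling each series by its value in the base period; using the €40/€48 and €500/€450 numbers from the example (a minimal sketch):

```python
import numpy as np

series_1 = np.array([40.0, 48.0])     # euros, base period first
series_2 = np.array([500.0, 450.0])

def rebase(s, base_idx=0):
    """Scale a series so it equals 100 in the base period."""
    return 100.0 * s / s[base_idx]

r1 = rebase(series_1)   # [100, 120] -> a 20% rise
r2 = rebase(series_2)   # [100, 90]  -> a 10% fall
```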
44,545
How to display magnitude of change over time between two series?
Try plotting both numbers using different scales for each one's y-axis? (I don't know Python's matplotlib library, but I'd be surprised if it can't handle that.) The idea would be to make the y-axis for the stock prices range between the lowest and highest prices seen, and to have the axis for the other values likewise span their lowest and highest values seen.
44,546
Creating univariable smoothed scatterplot on logit scale using R
You can find the H&L ALR text on the web. I believe what H&L are doing is simply fitting a loess to the dfree ~ age relationship and then transforming the expected probabilities to logits. See below.
uis <- read.delim("http://www.umass.edu/statdata/statdata/data/uis.dat",
                  skip=4, sep="", header=FALSE)
names(uis) <- c("id","age","beck","ivhx","ndrugx","race","reat","site","dfree")
lfit <- loess(uis$dfree ~ uis$age)
lgpred <- log(predict(lfit) / (1 - predict(lfit)))  # logit of the smoothed probabilities
plot(lgpred ~ uis$age)
As @Momo said, from there you can play around with the smoothing parameter to get a better reproduction.
44,547
Creating univariable smoothed scatterplot on logit scale using R
It didn't happen in this example, but you have to watch that the loess model doesn't get carried away and produce 'smoothed' probabilities that lie outside of (0,1). Following the example from Brett lprob <- predict(lfit) lprob <- apply(cbind(lprob, 0.01), MARGIN=1, FUN=max) lprob <- apply(cbind(lprob, 0.99), MARGIN=1, FUN=min) As a newbie working through Hosmer and Lemeshow, I found it interesting to plot the loess fit (as a probability) against age -- you get a good idea how it is forming a 'weighted average' between the unsmoothed 0's and 1's as age increases. By the way, to get pretty close to the graph H+L made, try lfit <- loess(uis$dfree ~ uis$age, span=.6, degree=1)
44,548
Creating univariable smoothed scatterplot on logit scale using R
The key here is that the logit is plotted on the y axis. When you're running a logistic regression, typically your data are a column of 1's and 0's. When values only occur at a limited number of discrete x values, they can be 'grouped', or turned into percentages. Let's assume that your data are in percentages. The logit transformation is: $$l=\ln\left(\frac{p}{1-p}\right)$$ where $l$ is the logit, $p$ is the percentage and $\ln$ (obviously) is the natural log. Given these values, the plot could be created in R with plot(lowess(age, logit)). If your data are not grouped (or group-able), then this would not work. (For example, the natural log of $0$ is -Inf, and $1/0$ is undefined.) In such a case, you might fit a lowess to your untransformed $y$ first (which would yield predicted probabilities) and assign the lowess fit to a variable. Then the variable can be transformed, as above, and plotted.
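The transformation itself is trivial to compute; a quick sketch in Python (used purely for illustration — the thread's code is R):

```python
import math

def logit(p):
    """Log-odds of a proportion p in (0, 1)."""
    return math.log(p / (1 - p))

def inv_logit(l):
    """Inverse transform: a logit back to a proportion."""
    return 1 / (1 + math.exp(-l))

# logit(0.5) is 0, and the two functions are inverses of each other
print(logit(0.5))
print(inv_logit(logit(0.8)))
```

Note that logit(0) and logit(1) are undefined, which is exactly why ungrouped 0/1 data must be smoothed to probabilities before transforming.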
44,549
Subdivisions in statistics
I wouldn't consider non-parametric or robust as being sub-categories of statistics in the way that frequentist and Bayesian are, simply because there are both frequentist and Bayesian methods for non-parametric and robust statistics. Frequentist and Bayesian are genuine sub-categories as they are based on fundamentally different definitions of a probability. Frequentists and Bayesians will both vary the strength of assumptions made depending on the requirements of the application. So I would say that particular subdivision into four categories is not widely recognised in statistics. In my opinion, both Bayesian and frequentist methods can be used for most statistical problems; however, they are not always equally useful. For example, whether a frequentist confidence interval or a Bayesian credible interval is more appropriate depends on whether you want to ask a question about what to expect if the experiment were replicated, or what we can conclude about the statistics as a result of the particular experiment that we have actually performed (I would suggest in most cases it is the latter, but scientists generally use frequentist methods anyway).
44,550
Subdivisions in statistics
I would not necessarily assert that those are the subdivisions present in statistics. If pressed, I'd argue that Frequentist versus Bayesian is the most clear division, although even that gets somewhat fuzzy at the edge cases and most people in practice seem to be a mix of the two. Robust and parametric/non-parametric aren't really divisions as much as different tools for different problems. Admittedly, there are people who only work in problems that lend themselves to one or the other, but that's people, not the actual statistics - and I'd argue not even most people. To use an example, I'd argue there's no "Subdivision in carpentry" between hammers and screw drivers, even though I know a guy who hates using nails. I'd say the far more profound division in statistics is how it's viewed from the perspective of a mathematician versus a dedicated statistician versus a statistically-literate applied researcher. To answer the second bit of your question: Sometimes There are times when you must use one method - because that method was designed to work when others fail. Exact statistics come to mind. But there are many, many questions where multiple approaches work. For example, a project I'm working on could be approached using either Bayesian or Frequentist methods, and use either a parametric, semi-parametric or non-parametric approach. That's six possible combinations of tools, and credible arguments for each. In the end, I chose the method that would be the most useful for me, in this project.
44,551
Probability of drawing no red balls from 20 draws without replacement given finite sample
Let $B$ denote blue balls and $R$ denote red balls; then you may apply the formula for the hypergeometric distribution: $$P(B = 20, R = 0) = \frac{\binom{10}{0}\binom{90}{20}}{\binom{100}{20}} = \frac{\binom{90}{20}}{\binom{100}{20}}$$ The last term exactly matches @Macro's answer, but the hypergeometric formula is more general. The idea behind the formula is simple: get the number of ways to draw $20$ $B$ of $90$, the number of ways to draw $0$ $R$ from $10$ (there is only one possibility), and divide their product by the number of ways to draw any $20$ balls from $100$. Hope this was not your homework ;)
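The value is easy to evaluate numerically; a quick check in Python (for illustration only — nothing here is specific to any statistics library):

```python
from math import comb

# P(no red in 20 draws) = C(90,20) / C(100,20)
p_hyper = comb(90, 20) / comb(100, 20)

# The same quantity written as the general hypergeometric term
p_general = comb(10, 0) * comb(90, 20) / comb(100, 20)

print(p_hyper)  # about 0.095
```

So there is roughly a 9.5% chance of seeing no red ball at all in 20 draws without replacement.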
44,552
Probability of drawing no red balls from 20 draws without replacement given finite sample
Well, on the first try, you have a $90/100$ probability of not drawing a red ball; if the first was not a red ball, then on the second try there are still 10 red balls left, but only 99 to choose from, so you have an $89/99$ chance of not drawing a red ball. Similarly, on the third draw, if the second draw was also not a red ball, then you have an $88/98$ chance of not drawing a red ball, and so on. In general, if you draw $k$ times without replacement, the probability you seek is $$ \prod_{i=1}^{k} \frac{ 90-i+1 }{100-i+1} $$ One important thing to note is that this probability actually doesn't arise from a binomial distribution. You are not conducting independent trials with equal probability and counting the number of "successes". The trials are not independent because the success probability of a future trial depends on whether a past trial was a success, making it fundamentally different from the binomial distribution. If there was replacement, then you'd be correct in saying the number of successes follows a binomial distribution.
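A short numeric check of this product, and of how much it differs from the with-replacement (binomial) answer — Python, for illustration:

```python
from math import prod

k = 20
# Without replacement: probability of never drawing a red ball,
# as the product of the sequential conditional probabilities above
p_no_repl = prod((90 - i + 1) / (100 - i + 1) for i in range(1, k + 1))

# With replacement the draws *are* independent, giving the binomial answer
p_repl = 0.9 ** k

print(p_no_repl)  # about 0.095
print(p_repl)     # about 0.122
```

The two differ noticeably: sampling without replacement depletes the blue balls as you go, so avoiding red for 20 straight draws is harder than in the independent-trials case.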
44,553
Looking for stats/probability practice problems with data and solutions
The Statistics topic area on Wikiversity is worth a look. It's got a long way to go before it's a comprehensive stand-alone syllabus, to be honest, but some of the Courses are more advanced than others, and when there's not much material as yet there are often links to free online resources.
44,554
Looking for stats/probability practice problems with data and solutions
If you are interested in Statistical Machine Learning, which seems to be THE thing these days, Tibshirani, Hastie, and Friedman's book is an invaluable resource. It is the latest edition and has a self contained website devoted to it.
44,555
Looking for stats/probability practice problems with data and solutions
I realise that this may not be what you are looking for, but R core and all packages come with data sets on which to practice the functionalities in each package. Many of these data sets are quite famous, and often a link is given to the paper in which the data are described. You could use these datasets in R and then, after you finish your analysis, look at what the authors of the paper did with the same data. That being said, it's rare for there to be a right answer in any real data analysis problem; mostly one learns by realising that your techniques were not appropriate and re-iterating until one reaches some level of satisfaction. Obviously this is a moving target though; as your skills increase, an older dataset may yield new insights.
44,556
Plotting a heatmap given a dendrogram and a distance matrix in R
I don't know a specific function for that. The ones I used generally take raw data or a distance matrix. However, it would not be very difficult to hack already existing code, without knowing more than basic R. Look at the source code for the cim() function in the mixOmics package for example (I chose this one because the source code is very easy to read; you will find other functions on the Bioconductor project). The interesting parts of the code are l. 92-113, where they assign the result of HC to ddc, and around l. 193-246 where they devise the plotting regions (you should input the values of your distance matrix in place of mat when they call image()). HTH Edit A recent Google search on a related subject led me to dendrogramGrob() from the latticeExtra package. Assuming you already have your sorted dendrogram object, you can skip the first lines of the example code from the on-line help and get something like this (here, with the mtcars dataset):
44,557
Plotting a heatmap given a dendrogram and a distance matrix in R
Assuming you also have the raw data, you can use function heatmap(). It can take one or two dendrograms as input, if you want to avoid calculating the distances and clustering the objects again. Let's first simulate some data: set.seed(1) dat<-matrix(ncol=4, nrow=10, data=rnorm(40)) Then cluster the rows and columns: rd<-dist(dat) rc<-hclust(rd) cd<-dist(t(dat)) cc<-hclust(cd) After this we have 1) the raw data (dat) 2) a distance matrix (rd) and a dendrogram (rc) for rows of the raw data matrix 3) a distance matrix (cd) and a dendrogram (cc) for columns of the raw data Distance matrices are not actually needed for the further steps, but the raw data on which the clustering was performed, and the resulting dendrogram(s) are. With the raw data these dendrograms can be used as input to the function heatmap(). If both row and column dendrograms are needed, use: heatmap(dat, Rowv=as.dendrogram(rc), Colv=as.dendrogram(cc)) If only row or column dendrogram is needed, use NA as an input for either Rowv or Colv parameter in heatmap(): # Dendrogram for rows only heatmap(dat, Rowv=as.dendrogram(rc), Colv=NA) # Dendrogram for columns only heatmap(dat, Rowv=NA, Colv=as.dendrogram(cc))
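For readers not using R, the same workflow — cluster rows and columns once, then reorder the matrix by the dendrogram leaf order before plotting — can be sketched with SciPy in Python. This is a hypothetical analogue, not part of the original answer; note that SciPy's linkage() defaults to single linkage, so "complete" is passed explicitly to mirror hclust's default:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, leaves_list
from scipy.spatial.distance import pdist

rng = np.random.default_rng(1)
dat = rng.normal(size=(10, 4))  # analogue of matrix(rnorm(40), 10, 4)

# Cluster rows and columns separately, as with hclust(dist(dat)) above
row_link = linkage(pdist(dat), method="complete")
col_link = linkage(pdist(dat.T), method="complete")

# Reorder the data matrix by each dendrogram's leaf order; plotting
# this reordered matrix (e.g. with imshow()) gives the heatmap body
ordered = dat[leaves_list(row_link), :][:, leaves_list(col_link)]
print(ordered.shape)  # (10, 4)
```

The dendrograms themselves can then be drawn alongside with scipy.cluster.hierarchy.dendrogram().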
44,558
Plotting a heatmap given a dendrogram and a distance matrix in R
You might try looking in the maptree or ape packages. What are you trying to do?
44,559
Is there an unpaired version of the sign test?
Good (2005) defines the one-sample sign-test for the location parameter $\theta$ for a continuous symmetric variable $X$ as follows: Take the difference $D_i$ of each observation to the location parameter $\theta_0$ under the null hypothesis. Define an indicator variable $Z_i$ as $0$ when $D_i < 0$, and as $1$ when $D_i > 0$. Since $X$ is continuous, $P(D_i = 0) = 0$. Calculate test statistic $T=\sum_i Z_i$. The distribution of $T$ is found by generating all $2^N$ possible outcomes of the $Z_i$ indicator variables (2 possibilities for each observation with equal probability $\frac{1}{2}$ under H0). This leads to the binomial distribution as in the sign test for 2 dependent samples. The justification for step 4 is: Suppose we had lost track of the signs of the deviations [...]. We could attach new signs at random [...]. If we are correct in our hypothesis that the variable has a symmetric distribution about $\theta_0$, the resulting values should have precisely the same distribution as the original observations. That is, the absolute values of the deviations are sufficient for regenerating the sample. (p34f) I agree that this reasoning seems somewhat different from a 2-sample permutation test where you re-assign experimental conditions to observations with the justification of exchangeability under H0. Good, P. 2005. Permutation, Parametric, and Bootstrap Tests of Hypotheses. New York: Springer.
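The last two steps amount to saying $T \sim \text{Binomial}(N, \frac{1}{2})$ under H0. A quick numeric sketch (Python used for illustration; the two-sided p-value here sums all outcomes no more likely than the observed one, which is one common convention):

```python
from math import comb

def sign_test_p(t, n):
    """Two-sided p-value for T = number of positive deviations,
    under H0: T ~ Binomial(n, 1/2)."""
    pmf = [comb(n, k) / 2**n for k in range(n + 1)]
    # sum probabilities of all outcomes at least as extreme as t
    return sum(p for k, p in enumerate(pmf) if p <= pmf[t])

print(sign_test_p(8, 10))  # 0.109375: 8 of 10 positive deviations
```

So even 8 positive deviations out of 10 is not significant at the 5% level, which illustrates how little power the sign test has at small N.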
44,560
Is there an unpaired version of the sign test?
I'm not sure if such a test can exist conceptually. The sign test uses the pairing of the data to decide whether one value is bigger than the corresponding other value. But in an unpaired situation there is nothing like a corresponding other value (every value in the other group could be a potential counterpart for comparison). Correct me please, if I'm not getting the point...
Is there an unpaired version of the sign test?
OK, I found that there is an unpaired counterpart to the sign test (a test of medians). It is called the "median test", and you can read about it on Wikipedia.
Is there an unpaired version of the sign test?
The extension goes through introducing ranks to somewhat regulate the ordering of the data, and the results are the Wilcoxon tests (Mann-Whitney in particular).
Analyze and generate "clumpy" distributions?
If assessing spatial autocorrelation is what you're interested in, here is a paper that simulates data and evaluates different autoregressive models in R: Spatial autocorrelation and the selection of simultaneous autoregressive models, by W. D. Kissling and G. Carl, Global Ecology and Biogeography, Vol. 17, No. 1 (January 2008), pp. 59-71 (PDF available here). Unfortunately they do not include the R code they used to generate the simulated data, but the code showing how they fit each of the models is available in the supplementary material. It would definitely help, though, if you could be a little clearer about the nature of your data. Many of the techniques intended for spatial analysis will probably not be implemented for higher-dimensional data, and I am sure there are other techniques that are more suitable. Some type of k-nearest-neighbours technique might be useful, and make sure to change your search term from "clumpy" to "cluster". Some other references you may find helpful: I would imagine the best resources for simulating data in such a manner are packages for R. I suggest you check out the spatstat R package page and the CRAN Task View for spatial data. I would also suggest you check out the GeoDa Center page, and you never know, the OpenSpace Google group may have some helpful info. I also came across this R mailing list concerning geo data, but I have not combed the archive very much at this point (though I'm sure there is useful material in there).
Edit: For those interested in simulating a pre-specified amount of spatial autocorrelation in a distribution, I recently came across a paper that gives a quite simple recommended procedure (Dray, 2011, page 136); I used the following steps to obtain a sample with a given autocorrelation level $\rho$: (1) generate a vector $y$ containing 100 iid normally distributed random values, (2) compute the inverse matrix $(I - \rho{W})^{-1}$, and (3) premultiply the vector $y$ by the matrix obtained in (2) to obtain autocorrelated data in the vector $x$ (i.e., $x = (I - \rho{W})^{-1}y$). The only thing not defined here is that $W$ is an a priori defined spatial weighting matrix. I'm not sure how this would translate to the multivariate case, but hopefully it is helpful to someone! Citation: Dray, Stephane. 2011. A new perspective about Moran's coefficient: Spatial autocorrelation as a linear regression problem. Geographical Analysis 43(2):127-141. (Unfortunately I did not come across a public PDF of the document.)
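A rough illustration of that recipe in Python — the 10×10 grid, the rook-contiguity weighting matrix $W$, and $\rho = 0.8$ are all made-up choices here, not from the paper:

```python
import numpy as np

# Simulate spatially autocorrelated data via x = (I - rho W)^{-1} y.
rng = np.random.default_rng(0)
n_side = 10
n = n_side * n_side

# Row-standardised rook-contiguity W on an n_side x n_side grid.
W = np.zeros((n, n))
for r in range(n_side):
    for c in range(n_side):
        i = r * n_side + c
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < n_side and 0 <= cc < n_side:
                W[i, rr * n_side + cc] = 1.0
W /= W.sum(axis=1, keepdims=True)

rho = 0.8
y = rng.standard_normal(n)                   # (1) iid normal values
x = np.linalg.solve(np.eye(n) - rho * W, y)  # (2)+(3): x = (I - rho W)^{-1} y

# Moran's I as a quick check of the induced autocorrelation.
def morans_i(z, W):
    z = z - z.mean()
    return (len(z) / W.sum()) * (z @ W @ z) / (z @ z)

print(morans_i(y, W))  # near zero for the iid input
print(morans_i(x, W))  # clearly positive for the autocorrelated output
```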
Analyze and generate "clumpy" distributions?
I think suitable 'clumpy coefficients' are measures of spatial autocorrelation such as Moran's I and Geary's C. Spatial statistics is not my area and I don't know about simulation though.
Analyze and generate "clumpy" distributions?
You could calculate an index of dispersion measure over your space to gauge clumpiness. One starting point for more information would be the ecology packages and literature to see how they simulate such things.
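One common index of dispersion is the variance-to-mean ratio of quadrat counts; a sketch, where the 5×5 grid and the point patterns are invented for illustration:

```python
import numpy as np

# Variance-to-mean ratio of quadrat counts over the unit square;
# values well above 1 suggest clumping, near 1 Poisson-like scatter.
rng = np.random.default_rng(4)

def dispersion_index(pts, bins=5):
    counts, _, _ = np.histogram2d(pts[:, 0], pts[:, 1],
                                  bins=bins, range=[[0, 1], [0, 1]])
    c = counts.ravel()
    return c.var(ddof=1) / c.mean()

csr = rng.random((250, 2))                # complete spatial randomness
centres = rng.random((5, 2)) * 0.8 + 0.1  # 5 cluster centres
clumped = centres[rng.integers(0, 5, 250)] + 0.03 * rng.standard_normal((250, 2))

print(dispersion_index(csr))      # close to 1
print(dispersion_index(clumped))  # far above 1
```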
Analyze and generate "clumpy" distributions?
Typical measures of autocorrelation, such as Moran's I, are global estimates of clumpiness and could be masked by a trend or by "averaging" of clumpiness. There are two ways you could handle this: 1) Use a local measure of autocorrelation - the drawback is that you don't get a single number for clumpiness. An example of this is the local Moran's I. Here is a document (from a Google search) that at least introduces the terms and gives some derivations: http://onlinelibrary.wiley.com/doi/10.1111/0022-4146.00224/abstract 2) Use a statistic specifically geared towards point distributions and their clumpiness at various spatial scales, such as Ripley's K: http://scholar.google.com/scholar?q=Ripley%27s+K&hl=en&as_sdt=0&as_vis=1&oi=scholart
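A naive Ripley's K estimate (no edge correction; the point patterns and the radius are invented for illustration) shows how the statistic separates random from clumped patterns:

```python
import numpy as np

# Naive Ripley's K in the unit square: K(r) = count of ordered pairs
# within distance r, divided by n * lambda (no edge correction).
rng = np.random.default_rng(1)

def ripley_k(pts, r, area=1.0):
    diff = pts[:, None, :] - pts[None, :, :]
    d = np.sqrt((diff ** 2).sum(axis=-1))
    np.fill_diagonal(d, np.inf)   # ignore self-pairs
    lam = len(pts) / area         # intensity estimate
    return (d < r).sum() / (len(pts) * lam)

csr = rng.random((200, 2))        # random (CSR) pattern
centres = rng.random((10, 2))
clumped = centres[rng.integers(0, 10, 200)] + 0.02 * rng.standard_normal((200, 2))

r = 0.1
print(ripley_k(csr, r))      # under CSR roughly pi*r**2, biased low w/o edge correction
print(ripley_k(clumped, r))  # markedly larger for the clumped pattern
```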
Ways to increase forecast accuracy [closed]
I have been forecasting retail demand for 16 years now. Retail is probably not what you are interested in, but a few comments on your ideas plus a few other ideas might be helpful. Tweaking the algorithms: to be honest, I usually find that better algorithms are always beaten by better data, and better understood data. More complex methods will often give better results. (Often, not always. In the recent M5 forecasting competition, a trivial benchmark beat 92.5% of the submissions at the lowest granularity, see Kolassa, 2022.) What is often useful is thinking about what the forecasting method should be capable of doing. If you have important causal drivers, you should use a method that can use them, so not plain vanilla Exponential Smoothing or ARIMA - but a simple regression will often be quite competitive with a highly complex DL network, at a fraction of the cost and headache. Enriching the data: if you use external drivers, remember that you will need to forecast these themselves in production, and forecasts of macroeconomic and many other series are highly imprecise! It's easy to fall into a trap here, using actual future values when testing such predictors and thus overestimating your certainty and the accuracy improvement you will get in production. And as above, the question is whether any improvement is worth the added cost and complexity of acquiring data and feeding it into the pipeline (and maintaining all this). I usually find that cleansing the data you do have, and understanding its drivers, is much more important. Are there no sales because of supply chain problems during some months? Mark these and ignore them in training (if your method allows it). If this happened, did demand switch to substitute products? If so, mark these periods on the substitutes, because when the original product comes back online, the substitutes will presumably see a drop in demand. Do you run promotions or similar activities? Model these.
Understanding your data and cleaning it is always more important than trying more complex models. Feasibility: this is the elephant in the forecasting room. You can't forecast a flipped coin with more than 50% accuracy, and if your business stakeholders "require" more accuracy than you can achieve, they have a problem. See How to know that your machine learning problem is hopeless?. A few other thoughts: Making processes more forecastable: some activities make life actively harder on the forecasters (and on the rest of the supply chain, too). Promotions in retail are notoriously hard to predict, and have cannibalization impacts on other products, and on the focal product after the promotion. There is a reason why well performing retailers like Walmart and dm drogeriemarkt in Germany run Every Day Low Price strategies - it's just easier on the supply chain, and makes forecasting easier, too. Relatedly, there have been spats between Consumer Packaged Goods manufacturers and retailers, which went as far as the manufacturers stopping deliveries. This will make forecasting harder down the line for everyone involved. Similar issues arise from product proliferation; it's easier to forecast if we have five flavors of yogurt than if we have thirty, half of which are listed and delisted all the time, even if your marketing department loves new product introductions. No, I'm not saying the forecaster has the clout to change business processes. But it might be worthwhile sitting down with other people and figuring out how their activities are negatively impacting the forecasting function. Mitigation. Imprecise forecasts can be mitigated through safety stocks. No, nobody likes these, but once we have reached the end of our tether in terms of forecast accuracy, we can buffer the impact. 
Reducing the role of forecasting: relatedly, we can reduce the reliance on forecasts by pushing customization down the line as far as possible: if we paint our widgets only right before they are shipped out, we may only need to forecast "total widgets", rather than "red widgets", "yellow widgets" and "light pink-mauve widgets" separately (which will be harder). Measure the costs of bad forecasts and accuracy improvements: per above, often there is a point of diminishing returns in forecast improvements, where you can start spending serious money to only get a small accuracy improvement. It's worthwhile to figure out how much your accuracy improvement is worth in currency terms. I give a couple of examples in Kolassa (2022), and issue 68 of Foresight is devoted to this topic (full disclosure: I'm a Deputy Editor at Foresight). Essentially, if your logistical constraints and economic batch sizes are "large", then even better forecasts may lead to the exact same business and production decisions. Accuracy measures: there is a huge and embarrassing disconnect between forecast accuracy measures and business relevance. Many people like the Mean Absolute Percentage Error, because it looks so easy to interpret. It isn't, it can be highly misleading, it can easily be gamed, and I have never seen a business process that would profit from a forecast only because its MAPE is lower. (If your bonus depends on the MAPE and you are cynical, you can simply game it.) The MSE and scaled variants at least elicit unbiased expectation forecasts, but in a world of safety stocks, surprisingly few business processes really leverage expectation forecasts. The relationship between quantile losses and safety stocks/service measures is a little better, but quantile forecasting is often underappreciated. So I would seriously recommend you look at your error measures and figure out whether they are useful for your business processes. 
Hierarchical forecasting: you may be able to leverage hierarchies, whether in the product, the location or the time dimension. There has been a lot of work on optimal reconciliation, and it typically improves forecasts across the board. However, most of the work here up to very recently has only been on expectation forecasting, and per the previous point, quantile forecasts are often much more relevant. Talk to experts: forecasting is a science, and there are experts out there, many of whom will be happy to talk to you, some even for free. I have been involved with the IIF and its publications, notably Foresight, there is also the Institute of Business Forecasters, and depending on where you are in the world, you might want to reach out to institutions like the Centre for Marketing Analytics and Forecasting at Lancaster University Management School. They regularly offer to have their M.Sc. students do a thesis in a company, and such students might be a reasonably cheap source of new ideas. (Full disclosure again: I'm affiliated with the CMAF.)
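The point above about the MAPE being easy to game is simple to demonstrate with a toy simulation; the lognormal "demand" and the specific low flat forecast below are illustrative choices:

```python
import numpy as np

# Toy demonstration that the MAPE can be gamed: for skewed (here
# lognormal) demand, a deliberately low flat forecast beats the true
# mean on MAPE, even though it is badly biased.
rng = np.random.default_rng(2)
demand = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)

def mape(actual, forecast):
    return np.mean(np.abs(actual - forecast) / actual)

print(mape(demand, demand.mean()))  # unbiased expectation forecast
print(mape(demand, np.exp(-1.0)))   # low-biased flat forecast scores better
```

An optimizer rewarded on MAPE alone would therefore learn to under-forecast systematically, which is exactly why the measure can mislead business processes built around it.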
Ways to increase forecast accuracy [closed]
First of all, I agree 100% with Stephen's answer; I'll just add a little bit from my 2 years of experience! The ML vs. traditional methods question IMO boils down to a simple one: do you have good drivers to use as variables? Time series methods work best for time series; of course you can use other factors to aid them, but with one time series going to one model you also need to be careful with those features. ML (boosted trees / RFs like you suggest) works best for tabular data, where you tend to lose your time series structure, so you have to make up for that with good tabular features and simply 'represent' time with other features - things like the price of products, marketing expense, etc. If you don't have these types of variables for your domain, then I would bet a decent stat engine outperforms a state-of-the-art ML model in a production setting. That production-setting piece is important: with an ML model you have very little control over the actual forecast - you get what you get. A stat engine should allow you to switch on the fly to another method if the forecast of the current one is wonky, which leads to my next thought. Just remember, though, that if you use something like GDP you then probably have to forecast GDP itself to use it in the future, which is probably very problematic! Or use lagged GDP, which may not be as useful. What makes a decent stat engine? Your model portfolio (what you are looking into now) is important, but model selection and a business logic layer are everything. For model selection, look to time series cross-validation. For the business logic layer, I would lean on the stakeholders of the forecast. For example, you probably want to assign a 'demand type' to each given time series. Like if 30% or more of the series is 0, then you want to assign it a 'type' which would only allow certain models to be selected, such as simple exponential smoothing, Croston, or the mean. An ARIMA may produce wonky results in those settings.
You could also check to ensure the forecast doesn't go from 5 units to 50 million, something that is possible with an overparameterized ARIMA. You could check to see if there are certain product lifecycles at play - like if there is a build-up and fall-off over the years - and then fit a more local model, or weight the more recent years more if your method takes sample weights. There are a lot of possibilities here for adding logic that aids the engine. In summary: add some naive methods. You could add some other methods, but I personally would stay away from Prophet - auto-ARIMA + auto-ETS + naive methods (mean, last period, last seasonal period) will be a good start. Take a look at your model selection criteria to ensure it is robust, and add some 'logic' to help ensure that the selected model is appropriate and isn't merely the one that minimizes some loss function. But most importantly - look at your forecasts. Set up some quick flags to surface forecasts where the model suggests new maxes/mins or where the average of the forecast period is significantly different from the average of the history. Figure out if there are commonalities between these flagged series, like a ton of zeros, etc. Many times it is just an odd bug in your code, meaning that your outlier detection isn't working right or some other issue causes bad results. If you have done all of that and want additional models to try, my main recommendations would be: Theta - there are tons of implementations across Python and R; Theta plus auto-ARIMA do well in general. Croston - pretty standard for intermittent data. A lot of 'AutoML' time series methods literally try everything under the sun, take a lot of time, and don't add much value beyond all the methods listed above.
Additionally you could try out some of my personal projects in the field ThymeBoost which is just gradient boosted time series decomposition with traditional methods like ETS and ARIMA TimeMurmur my newest which does large scale LightGBM time series forecasting, probably wouldn't use it in prod but you could give it a shot as a baseline.
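A minimal sketch of the time series cross-validation mentioned above, using rolling origins and two naive benchmarks (the toy monthly series, horizon, and "models" are all illustrative):

```python
import numpy as np

# Rolling-origin cross-validation: refit/forecast at each origin and
# average the out-of-sample MAE over all origins.
rng = np.random.default_rng(3)
t = np.arange(60)
series = 10 + 3 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.5, 60)

def naive_last(history, h):            # repeat the last observed value
    return np.full(h, history[-1])

def seasonal_naive(history, h, m=12):  # value from the same month last year
    return np.array([history[-m + (i % m)] for i in range(h)])

def cv_mae(model, series, h=6, min_train=36):
    errs = []
    for origin in range(min_train, len(series) - h + 1):
        fc = model(series[:origin], h)
        errs.append(np.mean(np.abs(series[origin:origin + h] - fc)))
    return float(np.mean(errs))

print(cv_mae(naive_last, series))      # hurt by the seasonal swings
print(cv_mae(seasonal_naive, series))  # much lower on this seasonal series
```

The same harness extends to real model fits (ETS, ARIMA, Theta): swap the benchmark functions for fit-and-forecast wrappers and pick the method with the lowest cross-validated error, subject to the business-logic checks described above.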
Ways to increase forecast accuracy [closed]
First of all I agree 100% with Stephen's answer, I'll just add a little bit from my 2 years of experience! The ML vs traditional methods IMO boils down to a simple question: Do you have good drivers
Ways to increase forecast accuracy [closed] First of all, I agree 100% with Stephen's answer; I'll just add a little bit from my 2 years of experience! The ML vs traditional methods question IMO boils down to a simple one: do you have good drivers to use as variables? Time series methods work best for time series; of course you can use other factors to aid them, but with 1 time series going to 1 model you also need to be careful with those features. ML (boosted trees / RFs like you suggest) works best for tabular data, where you tend to lose your time series structure, so you have to make up for that with good tabular features and simply 'represent' time with other features: things like price of products, marketing expense, etc. If you don't have these types of variables for your domain, then I would bet a decent stat engine outperforms a state-of-the-art ML model in a production setting. That production setting piece is important: with an ML model you have very little control over the actual forecast, you get what you get. A stat engine should allow you to switch on the fly to another method if the forecast of the current one is wonky, which leads to my next thought. Just remember, though, if you use something like GDP you then probably have to forecast GDP to use it in the future, which is probably very problematic! Or use lagged GDP, which may not be as useful. What makes a decent stat engine? Your model portfolio (what you are looking into now) is important, but model selection and a business logic layer are everything. For model selection look to time series cross validation. For the business logic layer I would lean on the stakeholders of the forecast. For example, you probably want to assign a 'demand type' to each given time series: if 30% or more of the series is zeros, assign it a type that only allows certain models to be selected, such as simple exponential smoothing, Croston, or the mean. An ARIMA may produce wonky results in those settings. 
You could also check that the forecast doesn't go from 5 units to 50 million, something that is possible with an overparameterized ARIMA. You could check whether certain product life cycles are at play, like a build-up and fall-off over the years, and then fit a more local model or weight the more recent years more heavily if your method takes sample weights. A lot of possibilities here for adding logic that aids the engine. In summary: add some naive methods. You could add some other methods too, but I personally would stay away from Prophet; auto-ARIMA + auto-ETS + naive methods (mean, last period, last seasonal period) will be a good start. Take a look at your model selection criteria to ensure they are robust, and add some 'logic' to help ensure that the chosen model is appropriate and isn't merely the one that minimizes some loss function. But most importantly: look at your forecasts. Set up some quick flags to surface forecasts where the model suggests new maxes/mins or where the average of the forecast period is significantly different from the average of the history. Figure out if there are commonalities between these flagged series, like a ton of zeros, etc. Many times it is just an odd bug in your code, meaning that your outlier detection isn't working right or some other issue is causing bad results. If you have done all of that and want additional models to try, my main recommendations would be: Theta (there are tons of implementations across Python and R; Theta plus auto-ARIMA do well in general) and Croston (pretty standard for intermittent data). A lot of 'AutoML' time series methods literally try everything under the sun, take a lot of time, and don't add much value beyond all the methods listed above. 
Additionally, you could try out some of my personal projects in the field: ThymeBoost, which is gradient-boosted time series decomposition with traditional methods like ETS and ARIMA, and TimeMurmur, my newest, which does large-scale LightGBM time series forecasting. I probably wouldn't use it in prod, but you could give it a shot as a baseline.
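To make the "naive methods + rolling evaluation" suggestion concrete, here is a minimal Python sketch of the three naive baselines (mean, last period, last seasonal period) scored with a rolling-origin evaluation. The toy series, the season length of 4, and the evaluation window are all made up for illustration.

```python
# Naive forecasting baselines with a rolling-origin (time series CV) check.
# The toy series and season length below are illustrative, not real data.

def mean_forecast(history):
    return sum(history) / len(history)

def last_period(history):
    return history[-1]

def last_seasonal(history, season=4):
    return history[-season]

def rolling_mae(series, forecaster, start=8):
    """Rolling-origin evaluation: fit on series[:t], predict point t."""
    errors = [abs(series[t] - forecaster(series[:t]))
              for t in range(start, len(series))]
    return sum(errors) / len(errors)

# Toy seasonal series (period 4) with a little upward drift.
series = [10, 20, 30, 40, 12, 22, 32, 42, 14, 24, 34, 44, 16, 26, 36, 46]

for name, f in [("mean", mean_forecast),
                ("last period", last_period),
                ("seasonal naive", last_seasonal)]:
    print(name, round(rolling_mae(series, f), 2))
```

On a seasonal series like this the seasonal naive wins easily, which is exactly why such baselines are worth having in the portfolio: any fancier model has to beat them first.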
44,569
Expectation of the ratio of sum (XY) and sum(X)
I will assume $a=0$ and $b=1$ in the following. Here is a simulation experiment to look at the variability of the expectation in $M$: N=1e3; T=1e2 e=rep(0,N) f=matrix(0,T,N) for(t in 1:T){ phi2=runif(N) we=runif(N)/(1-phi2) f[t,]=cumsum(we*(5-4*phi2))/cumsum(we) e=e+f[t,]} e=e/T with the plot of the averaged e (in red) against the f[t,]'s (in gray): The expectation thus appears to decrease with $M$. Due to the lack of expectation of the individual weights $\sigma^2_{0,i}/(1-\varphi_i^2)$, it is unclear that the average within the expectation enjoys a finite variance, as indicated by the repeated jumps in the individual gray curves. Note that an equivalent expression for the expectation is $$1+4\mathbb E^{U,W}\left[\sum_{i=1}^M U_i\Big/\sum_{i=1}^M W_i \right]$$ with$$(U_i,W_i)\sim \frac{u}{w^2}\mathbb I_{(0,1)}(u)\mathbb I_{w>u}$$ by a change of variables and that it is approximately $$1+2\mathbb E^{W}[\overline W_M^{-1}]\tag{1}$$for $M$ large. (Or asymptotically equivalent by Slutsky's theorem.) Note also that the marginal distribution of the $W_i$'s is $$W\sim \frac{(w\wedge 1)^2}{2w^2}=\frac{\mathbb I_{0<w<1}}{2}+ \frac{\mathbb I_{w>1}}{2w^2}$$ This means that an asymptotic equivalent to (1) is $$1+4\mathbb E^{W}[1\big/(\overline S_{M/2}+\overline R_{M/2})]$$ where$$\overline S_{n}=\frac{1}{n}\sum_{i=1}^n U_i\qquad \overline R_n=\frac{1}{n}\sum_{i=1}^n V_i^{-1}\qquad U_i,V_i\sim\mathcal{U}(0,1)$$hence $$1+8\mathbb E^{W}[1\big/(1+2\overline R_{M/2})]$$ Comparing the distribution of$$1+4\sum_{i=1}^M U_i\Big/\sum_{i=1}^M W_i$$ with the distribution of$$1+8\big/(1+2\overline R_{M})$$does not exhibit any significant difference: The distribution of a sum of Pareto variates is particularly intricate. However, the limiting distribution of the centred average is a stable distribution. Namely, $$\frac{\overline{R}_M-\log(M)-C}{\pi/2}\approx F_{1,1}$$ where $C\equiv 0.8744...$ and $F_{1,1}$ is the stable distribution for $\alpha=\beta=1$. 
With cdf $$F_{1,1}(x)=2\left(1-\Phi\left(2/\sqrt\pi\, \exp\{-1/2-\pi x\sqrt2/4\}\right)\right)$$
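A quick Monte Carlo sanity check of the first simulation, rewritten as a hedged Python sketch (sample sizes are arbitrary): for $M=1$ the weighted average reduces to $5-4\varphi^2$, whose mean is exactly $3$, while for larger $M$ the estimate sits below $3$, consistent with the decreasing trend noted above.

```python
import random

random.seed(0)

def weighted_avg_sample(M):
    """One draw of sum(w_i*(5-4*phi2_i)) / sum(w_i), with
    phi2_i ~ U(0,1) and w_i = u_i/(1-phi2_i), u_i ~ U(0,1)."""
    num = den = 0.0
    for _ in range(M):
        phi2 = random.random()
        w = random.random() / (1.0 - phi2)
        num += w * (5.0 - 4.0 * phi2)
        den += w
    return num / den

def mc_mean(M, reps=100_000):
    """Monte Carlo estimate of the expectation for a given M."""
    return sum(weighted_avg_sample(M) for _ in range(reps)) / reps

m1, m10 = mc_mean(1), mc_mean(10)
print(m1)   # close to the exact value 3 for M = 1
print(m10)  # smaller: the expectation decreases with M
```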
44,570
Expectation of the ratio of sum (XY) and sum(X)
In your case of $\text{sum}(XY)/\text{sum}(X)$ you have that the $X$ and $Y$ are correlated. We can rewrite it in a different form such that we have a similar weighted average expression but with uncorrelated $X$ and $Y$. You will get that you can relate it to the following expression: $$1 + 4 E\left[\left(\frac{\sum_{i=1}^{M} X_iZ_i}{\sum_{i=1}^{M} X_i}\right)^{-1}\right]$$ where the weights are $X_i \sim U(a,b)$ and the variable $Z_i \sim Pareto(\alpha = 1, x_m = 1)$ follows a Pareto distribution. Possibly it is easier to first solve the simpler (but still difficult) problem with $\beta_i = 1$ constant. $$E\left[\left(\frac{1}{M}{\sum_{i=1}^{M} Z_i}\right)^{-1}\right]$$ Derivation The equation could be simplified (easier to read) by using different variables like $\alpha_i = 1-\varphi_i^2 \sim U(0,1)$ and $\beta_{i} = \phi_{0,i}^2 \sim U(a,b)$ such that your question becomes $$E\left[ {\frac{{\sum\limits_{i = 1}^M {\frac{{\beta_{i}}}{{\alpha_i}} \cdot \left( {1+4\alpha_i } \right)} }}{{\sum\limits_{i = 1}^M {\frac{{\beta_{i}}}{{\alpha_i}}} }}} \right]$$ where $\beta _{i} \sim U\left( {a,b} \right),b > a > 0$ and $\alpha_i \sim U\left( {0,1} \right)$. 
The expression can also be simplified further $$E\left[ {\frac{{\sum\limits_{i = 1}^M {\frac{{\beta_{i}}}{{\alpha_i}} \cdot \left( {1+4\alpha_i } \right)} }}{{\sum\limits_{i = 1}^M {\frac{{\beta_{i}}}{{\alpha_i}}} }}} \right] = E\left[ {\frac{{\sum\limits_{i = 1}^M { \frac{{\beta_{i}}}{{\alpha_i}} + 4 \beta_{i} } }}{{\sum\limits_{i = 1}^M {\frac{{\beta _{i}}}{{\alpha_i}}} }}} \right]= E\left[ {\frac{{\sum\limits_{i = 1}^M { \frac{{\beta_{i}}}{{\alpha_i}} + \sum\limits_{i = 1}^M 4 \beta_{i} } }}{{\sum\limits_{i = 1}^M {\frac{{\beta _{i}}}{{\alpha_i}}} }}} \right] = 1 + 4 \cdot E\left[ {\frac{\sum\limits_{i = 1}^M \beta_{i} }{{\sum\limits_{i = 1}^M {\frac{{\beta _{i}}}{{\alpha_i}}} }}} \right]$$ Alternative viewpoint as random walk For $M=1$ we get: $$1 + 4 \cdot E\left[ \frac{\beta_{1}}{\frac{\beta _{1}}{\alpha_1}} \right] = 1 + 4 \cdot E\left[ {\alpha_1} \right] = 3$$ For $M=2$ we get: $$1 + 4 \cdot E\left[ \frac{\beta_{1} + \beta_{2}}{\frac{\beta _{1}}{\alpha_1} + \frac{\beta_{2}}{\alpha_2}} \right] = 1 + 4 \cdot E\left[ \frac{\beta_{1}\alpha_2\cdot \alpha_1 + \beta_{2}\alpha_1\cdot \alpha_2}{\beta_{1}\alpha_2\phantom{\cdot \alpha_1} + \beta_{2}\alpha_1\phantom{\cdot \alpha_1} } \right] $$ For $M=3$ we get: $$1 + 4 \cdot E\left[ \frac{\beta_{1} + \beta_{2} + \beta_{3}}{\frac{\beta_{1}}{\alpha_1} + \frac{\beta_{2}}{\alpha_2} + \frac{\beta_{3}}{\alpha_3}} \right] = 1 + 4 \cdot E\left[ \frac{\beta_{1}\alpha_2\alpha_3\cdot \alpha_1 + \beta_{2}\alpha_1\alpha_3\cdot \alpha_2+ \beta_{3}\alpha_1\alpha_2\cdot \alpha_3}{\beta_{1}\alpha_2\alpha_3 \phantom{\cdot \alpha_1} + \beta_{2}\alpha_1\alpha_3 \phantom{\cdot \alpha_1} + \beta_{3}\alpha_1\alpha_2\phantom{\cdot \alpha_1} } \right] $$ For more general $M$ you seem to get the expectation of a weighted average of the $\alpha_i$, where the weighting is $\beta_i \prod_{l \neq i} \alpha_l $. 
$$X_{k} = \frac{\sum_{i = 1}^k \left( \beta_i \prod_{l \neq i} \alpha_l \right) \cdot \alpha_i} {\sum_{i = 1}^k \left( \beta_i \prod_{l \neq i} \alpha_l \right)}$$ When we add a sample $\beta_{k+1},\alpha_{k+1}$ to a sample of size $k$, we can recompute the value as $$X_{k+1} = \frac{\left(Q_k\alpha_{k+1}\right) \cdot X_{k} + \left(\beta_{k+1} \prod_{i=1}^k \alpha_i \right) \cdot \alpha_{k+1}}{\left(Q_k\alpha_{k+1}\right) \hphantom{\cdot X_{k}} + \left(\beta_{k+1} \prod_{i=1}^k \alpha_i \right) \hphantom{\cdot \alpha_{k+1}}} $$ where $Q_k = \sum_{i = 1}^k \left( \beta_i \prod_{l \neq i} \alpha_l \right)$. This looks like some sort of random walk $$X_{k+1} = \phi_{k+1} X_k + (1-\phi_{k+1}) \alpha_{k+1}$$ with $$\phi_{k+1} = \frac{Q_k\alpha_{k+1}}{Q_k\alpha_{k+1} + \beta_{k+1} P_k } $$ and $P_{k+1} = P_{k} \alpha_{k+1}$ and $Q_{k+1} = Q_k \alpha_{k+1} + P_k \beta_{k+1}$. The code below demonstrates this alternative view of how the random variable $X_n$ evolves from $n$ to $n+1$: a = 1 b = 3 alpha = runif(1,0,1) beta = runif(1,a,b) X = alpha Q = beta ### P_1 = alpha_1, the running product of the alphas P = alpha ### keep the horizon moderate: P underflows to zero for very long runs for (i in 1:500) { alpha_n = runif(1,0,1) beta_n = runif(1,a,b) alpha = c(alpha, alpha_n) beta = c(beta, beta_n) phi = Q*alpha_n /(Q*alpha_n + P*beta_n) X = c(X,phi * tail(X,1) + (1-phi) * alpha_n) Q = Q * alpha_n + P * beta_n P = P * alpha_n } ### two different ways to compute a series of X plot(X*4+1, type = "l") plot(1+4*cumsum(beta)/cumsum(beta/alpha), type = "l") I wonder if we can use this iterative process to express the expectation value as some recursive relationship. 
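The same consistency check can be written as a Python sketch. Note that the recursion must be initialized with $P_1=\alpha_1$ (so that $P_k=\prod_{i=1}^k\alpha_i$), and that the horizon is kept moderate because that product shrinks geometrically; with that, the recursive $X_k$ agrees with the closed form $\sum\beta_i \big/ \sum(\beta_i/\alpha_i)$ up to floating-point error:

```python
import random

random.seed(1)

a, b = 1.0, 3.0            # same illustrative bounds as in the R snippet
alpha = random.random()
beta = random.uniform(a, b)

X = alpha                  # X_1 = alpha_1
Q = beta                   # Q_1 = beta_1
P = alpha                  # P_1 = alpha_1 (running product of the alphas)
sum_b, sum_ba = beta, beta / alpha   # accumulators for the direct formula

for _ in range(399):       # modest horizon: P decays geometrically
    a_n = random.random()
    b_n = random.uniform(a, b)
    phi = Q * a_n / (Q * a_n + P * b_n)
    X = phi * X + (1.0 - phi) * a_n    # random-walk recursion
    Q = Q * a_n + P * b_n
    P = P * a_n
    sum_b += b_n
    sum_ba += b_n / a_n

direct = sum_b / sum_ba    # X_k = sum(beta) / sum(beta/alpha)
print(abs(X - direct))     # agreement up to floating-point error
```

The equality is exact in real arithmetic (divide the numerator and denominator of $X_k$ by $\prod_l\alpha_l$), so any discrepancy here is pure rounding error.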
Geometric interpretation The problem is equivalent to evaluating the expression $$E\left[ \left( \frac{{\sum\limits_{i = 1}^M {\frac{{\beta _{i}}}{{\alpha_i}}} }}{\sum\limits_{i = 1}^M \beta_{i} } \right)^{-1}\right]$$ with $\alpha_i \sim U(0,1)$ and $\beta_i \sim U(a,b)$. We can focus on the sum inside: $$\frac{{\sum\limits_{i = 1}^M {\frac{{\beta _{i}}}{{\alpha_i}}} }}{\sum\limits_{i = 1}^M \beta_{i} }$$ This is equal to the integral of a random path that is created by ordering the $\alpha_i$ and making a horizontal step of size $\frac{\beta_i}{\sum\limits_{i = 1}^M \beta_{i} }$ at a height of $\alpha_i$. Let this curve be $\alpha(x)$; then we have $$ \int_0^1 \frac{1}{\alpha(x)} dx = \frac{{\sum\limits_{i = 1}^M {\frac{{\beta _{i}}}{{\alpha_i}}} }}{\sum\limits_{i = 1}^M \beta_{i} }$$ and we can see the expectation as $$E\left[\frac{1}{\int_0^1 \frac{1}{\alpha(x)} dx }\right]$$ Here this curve $\alpha(x)$ resembles an empirical distribution function for a uniform variable.
44,571
What is the probability a person sees a tree by looking out of the window?
How can one calculate the probability with any degree of accuracy? There is no way to compute this, because the estimates we make to perform the computation have an undefined accuracy due to lack of knowledge. The way it is generally tackled is that we use some simplified model and apply it to the problem. But the model is wrong, and we have no way to express exactly how wrong. Still, as long as the range of error is small, or smaller than the statistical variations, the model is good enough to apply. See also: https://en.m.wikipedia.org/wiki/All_models_are_wrong
44,572
What is the probability a person sees a tree by looking out of the window?
Well, that's what statistics is about, right? All those variables you mentioned are unobserved and can impact the outcome, so we choose to encode this uncertainty about the problem as probabilities. If you have no data, there is no way to answer the problem, especially when probabilities are interpreted as relative frequencies. If you interpret probabilities as uncertainty about the problem, as Bayesians do, then you can draw a statistical conclusion based on that. For example, in the absence of knowledge you might assume that $Pr(tree) = 0.5$, which serves as your prior belief. Then, whatever data you do observe, you update these beliefs and end up with some posterior beliefs. Obviously, the more data you observe, the better your estimates will be.
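One concrete way to do that updating is a standard Beta-Bernoulli sketch (the counts below are hypothetical): starting from a uniform Beta(1, 1) prior, which has mean 0.5, observing $s$ tree-sightings in $n$ glances gives a Beta$(1+s,\,1+n-s)$ posterior whose mean moves from the prior toward the observed frequency as data accumulate.

```python
def posterior_mean(successes, trials, a=1.0, b=1.0):
    """Posterior mean of Pr(tree) under a Beta(a, b) prior after
    observing `successes` tree-sightings in `trials` glances."""
    return (a + successes) / (a + b + trials)

print(posterior_mean(0, 0))       # no data: the prior mean, 0.5
print(posterior_mean(7, 10))      # 8/12, pulled slightly toward 0.5
print(posterior_mean(700, 1000))  # close to the observed frequency 0.7
```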
44,573
What is the probability a person sees a tree by looking out of the window?
This is what supervised learning does, particularly so-called “classification” models (most of which make probability predictions; “classification” is all but a euphemism for predicting the probability of a discrete outcome). Consider a deck of cards. I draw a card and ask you to guess the card without showing it to you. You have a $1/52$ probability of guessing the right card, so a little less than a $2\%$ chance. If I then show you that the card is red, you’ve ruled out half of the cards and know that it must be a diamond or a heart. Your probability of guessing the right card increased from $1/52$, when you were completely ignorant about the card I drew, to $1/26$ when I gave you some information about the card. In machine learning or predictive modeling, those details about the cards are called features or predictors (probably some other terms, too). How to use the available features and synthesize new ones from existing features is the special sauce of accurate predictive models. In your example, in the absence of much information about the viewer, you might think there is a low chance of her seeing a tree. However, if you know that she looked in the direction of a tree in the middle of the daytime, when she should be able to see, perhaps you would think there is a high probability of her seeing a tree. Conversely, if you knew that she looked at night without the help of a flashlight, on an overcast night that even had a new moon (so no moonlight), you might expect there to be a particularly low probability of her seeing a tree. How to model something like this is an open question that machine learning and predictive modeling practitioners tackle every day.
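The card arithmetic can be checked by brute force: conditioning on "the card is red" halves the sample space and therefore doubles the guessing probability. A tiny sketch:

```python
from fractions import Fraction

# Enumerate a standard 52-card deck.
ranks = [str(n) for n in range(2, 11)] + ["J", "Q", "K", "A"]
suits = ["hearts", "diamonds", "clubs", "spades"]
deck = [(r, s) for r in ranks for s in suits]

# Unconditional: one favourable card out of 52.
p_guess = Fraction(1, len(deck))

# Condition on the revealed feature "the card is red".
red = [c for c in deck if c[1] in ("hearts", "diamonds")]
p_guess_given_red = Fraction(1, len(red))

print(p_guess)            # 1/52
print(p_guess_given_red)  # 1/26
```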
44,574
What is the probability a person sees a tree by looking out of the window?
When there are that many unknowns, generally you'd say "I don't know the probability". For example, your local bookmaker will not give you odds on this event with the tree, and your local insurance agent will not sell you insurance against it. In order to produce a probability you could take one of at least two approaches: Get the person to look out of the window lots of times, at various angles, times of day, etc., record when they saw a tree and when they didn't, and come up with a frequency. Use this as a probability. You do not even necessarily need to know how many of the trials occurred during the day and how many at night. You don't know what factors affect whether a tree is seen or not; you just know that you've measured the event you're interested in. Consider all the variables, make lots of measurements, define a more precise model to describe what is going on, put probability distributions on each of the parameters of your model (such as "time of day" or "angle of glance"), and derive a probability for the event from what your model tells you are the times/angles/etc. that produce tree-sightings. Very loosely speaking, the former approach might be used by an office manager for a question like "what is the probability that someone in my office has COVID-19?", where you really can't do a lot of careful research and modelling, but perhaps you do have access to the self-reported results of various kinds of tests, or failing that to government estimates of COVID-19 prevalence in your population as a whole. The second approach might be used by "scientists"[*] for a question like "what is the probability that a person with COVID-19, on walking into a crowded supermarket, will infect at least one other person?", which is the sort of thing a committed epidemiologist might try to tackle. 
That doesn't necessarily mean all epidemiologists would come up with the same answer, of course, since they might make different decisions about what to ignore, what to include in the model, and how to model it, which means they get to different numbers. You can't generally put a probability on "my theory of physics/biology/shopping is completely wrong and therefore everything which follows from it is bunk", since you have neither a good model nor a good observed frequency for that. It's best not to think in terms of "every conceivable event has a probability, and my task is to calculate it". Rather, the actual physical world has observed events, any probabilistic model you make of the world generates probabilities, and any relation between the two comes down to whether your model is any good or not. The reason we're 100% confident that a uniform selection from 1-4 has probability 0.25 of giving the number 3 is that this is a mathematical theorem following immediately from the definition. We are sure of mathematical definitions. No real-world events are even described in the sentence whose truth we are sure of: it's just a straight application of the definition of "uniform discrete probability distribution". The fact that we're 100% sure of mathematics (which itself arguably is a matter of opinion, but you say you are and I believe you) doesn't help us put a number on how sure we are of optics, or of the medical theory of hallucinations, or that those trees won't blow down in the night (in which case the probability of spotting them tomorrow is very different from what it is today). Statisticians working for insurance companies, however, might actually have quite good data on the nationwide incidence of trees blowing down in the night. 
The reason they could plausibly care is that if you have a tree near your house, they might want to have an opinion on whether they should instruct you to remove it, or at least charge you higher premiums on your buildings insurance than they charge for buildings far from trees. So, any particular factor is subject to study, but to produce a probability you always have to decide at some point to ignore everything you haven't studied. [*] The same ones from the dread-inducing journalistic expression, "scientists say that...", which with high probability means that all scientific theory and common-sense nuance will be omitted from the remainder of the article.
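The first approach above, estimating the probability as an observed frequency, can be sketched in a few lines; adding a simple normal-approximation interval shows how the estimate's precision improves with more trials. The tallies below are made up for illustration.

```python
import math

def freq_estimate(successes, trials, z=1.96):
    """Observed frequency with a normal-approximation ~95% interval,
    clipped to [0, 1]."""
    p = successes / trials
    half = z * math.sqrt(p * (1.0 - p) / trials)
    return p, max(0.0, p - half), min(1.0, p + half)

# Hypothetical tally: 34 tree-sightings in 50 glances out of the window.
p, lo, hi = freq_estimate(34, 50)
print(round(p, 2), round(lo, 2), round(hi, 2))

# Ten times the data: same point estimate, much narrower interval.
p2, lo2, hi2 = freq_estimate(340, 500)
print(round(hi - lo, 3), "vs", round(hi2 - lo2, 3))
```

The point estimate is the same either way; only the width of the interval tells you how much the extra window-gazing bought you.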
What is the probability a person sees a tree by looking out of the window?
When there are that many unknowns, generally you'd say "I don't know the probability". For example, your local book-maker will not give you odds on this event with the tree, and your local insurance a
What is the probability a person sees a tree by looking out of the window?

When there are that many unknowns, generally you'd say "I don't know the probability". For example, your local book-maker will not give you odds on this event with the tree, and your local insurance agent will not sell you insurance against it. In order to produce a probability you could take one of at least two approaches:

Get the person to look out of the window lots of times, at various angles, times of day, etc., record when they saw a tree and when they didn't, and come up with a frequency. Use this as a probability. You do not even necessarily need to know how many of the trials occurred during the day and how many at night. You don't know what factors affect whether a tree is seen or not. You just know that you've measured the event you're interested in.

Consider all the variables, make lots of measurements, define a more precise model to describe what is going on, put probability distributions on each of the parameters of your model (such as "time of day" or "angle of glance"), and derive a probability for the event from what your model tells you are the times/angles/etc. that produce tree-sightings.

Very loosely speaking, the former approach might be used by an office manager for a question like "what is the probability that someone in my office has COVID-19?", where you really can't do a lot of careful research and modelling, but perhaps you do have access to the self-reported results of various kinds of tests, or failing that to government estimates of COVID-19 prevalence in your population as a whole. The second approach might be used by "scientists"[*] for a question like "what is the probability that a person with COVID-19, on walking into a crowded supermarket, will infect at least one other person?", which is the sort of thing a committed epidemiologist might try to tackle.

That doesn't necessarily mean all epidemiologists would come up with the same answer, of course, since they might make different decisions about what to ignore, what to include in the model, and how to model it, which means they get to different numbers. You can't generally put a probability on "my theory of physics/biology/shopping is completely wrong and therefore everything which follows from it is bunk", since you have neither a good model nor a good observed frequency for that. It's best not to think in terms of "every conceivable event has a probability, and my task is to calculate it". Rather, the actual physical world has observed events, any probabilistic model you make of the world generates probabilities, and any relation between the two is down to whether your model is any good or not.

The reason we're 100% confident that a uniform selection from 1-4 has probability 0.25 of giving the number 3 is that this is a mathematical theorem following immediately from the definition. We are sure of mathematical definitions. No real-world events are even described in the sentence whose truth we are sure of: it's just a straight application of the definition of "uniform discrete probability distribution". The fact that we're 100% sure of mathematics (which itself arguably is a matter of opinion, but you say you are and I believe you) doesn't help us put a number on how sure we are of optics, or the medical theory of hallucinations, or that those trees won't blow down in the night and therefore the probability of spotting them tomorrow is very different from what it is today. Statisticians working for insurance companies, however, might actually have quite good data on the nationwide incidence of trees blowing down in the night.

The reason they could plausibly care is that if you have a tree near your house, they might want to have an opinion on whether they should instruct you to remove it, or at least charge you higher premiums on your buildings insurance than they charge for buildings far from trees. So, any particular factor is subject to study, but to produce a probability you always have to decide at some point to ignore everything you haven't studied.

[*] The same ones from the dread-inducing journalistic expression, "scientists say that...", which with high probability means that all scientific theory and common-sense nuance will be omitted from the remainder of the article.
What is the probability a person sees a tree by looking out of the window?
Consider the probabilistic subject $$ \text{prob}(H | I) $$ i.e. "the probability that $H$, given that $I$". Here $H$ and $I$ are meaningful propositions, and $I$ must not be necessarily false. For some such pairs $(H, I)$, we cannot evaluate the subject, e.g. $$ \text{prob}(\text{dogs are risible} | \text{it will rain next Tuesday}) $$

Now consider your subject: $$ \begin{align} \text{prob}(&\text{a person sees a tree by looking out of the window} | \\ &\text{there are two trees that could be seen from that window}) \end{align} $$ You haven't supplied enough information to calculate the probability, so your subject falls into the same category as my example involving dogs and rain. That's the end of the matter.

To make some quantitative progress, we'd have to add information that gives rise to symmetries in the situation which enable us to assign probabilities. For example, let $I$ be:

There are two trees outside. Each time Bob glances out the window, he sees tree 1 with probability 0.3; if he sees tree 1, he sees tree 2 with probability 0.1; and if he doesn't see tree 1, he sees tree 2 with probability 0.4. Bob takes two glances out the window.

and let $H$ be "Bob sees tree 1 or tree 2 or both". Then we could calculate the probability.

The business of forming probabilistic subjects that are relevant yet amenable to calculation, and evaluating them, is essentially the whole subject and art of probability!
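With information like that $I$, the calculation becomes mechanical. A minimal sketch (assuming, additionally, that Bob's two glances are independent; that independence is my assumption, not part of $I$ as stated):

```python
# Probabilities given in I (all values are from the hypothetical example above)
p_t1 = 0.3              # P(sees tree 1) on one glance
p_t2_given_t1 = 0.1     # P(sees tree 2 | sees tree 1) -- not needed for P(no tree)
p_t2_given_not_t1 = 0.4 # P(sees tree 2 | doesn't see tree 1)

# P(no tree at all on a single glance) = P(not tree 1) * P(not tree 2 | not tree 1)
p_none_one_glance = (1 - p_t1) * (1 - p_t2_given_not_t1)   # 0.7 * 0.6 = 0.42

# H: at least one tree over two glances (glances assumed independent)
p_h = 1 - p_none_one_glance**2                             # 1 - 0.42**2 = 0.8236
print(p_h)
```

The conditional probability $P(\text{tree 2} \mid \text{tree 1})$ never enters, since the complement event "sees no tree" only uses the "didn't see tree 1" branch.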
What is the probability a person sees a tree by looking out of the window?
There are many things to consider [...] And the list goes on which may contain infinite possibilities leading to seeing / not seeing a tree.

If you want to consider even extreme cases like someone being delusional, then, as you noticed, there is an infinite number of possibilities, so the answer is simple: the probability is zero. It is something divided by infinity, so it approaches zero. The calculation is pretty useless because the question is too broad. That is why we usually simplify such questions by limiting the scenarios to consider (for most use cases you can probably ignore delusions, etc.). Not seeing a tree because of being blind is a different thing than not seeing one because of living in the desert.
Why Spearman's rank correlation ranges from -1 to 1
See Wikipedia for the definition. Note that Spearman correlation is just the usual Pearson correlation, but calculated using the ranks of the data, not the data itself. So the reason it is always in the interval $[-1,1]$ is the same as the proof for the Pearson correlation: the Cauchy-Schwarz inequality.
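That equivalence is easy to check numerically. A pure-NumPy sketch (assuming no ties, in which case Pearson correlation computed on the ranks coincides with the classic Spearman formula $1 - 6\sum d_i^2/(n^3-n)$; the helper names are mine):

```python
import numpy as np

def ranks(a):
    """Ranks 1..n of the entries of a, assuming no ties."""
    return np.argsort(np.argsort(a)) + 1

def pearson(u, v):
    """Plain Pearson correlation of two equal-length arrays."""
    u = u - u.mean()
    v = v - v.mean()
    return (u @ v) / np.sqrt((u @ u) * (v @ v))

rng = np.random.default_rng(0)
x = rng.normal(size=30)
y = x**3 + rng.normal(size=30)      # a monotone-ish, nonlinear relationship

rx = ranks(x).astype(float)
ry = ranks(y).astype(float)

rho_via_pearson = pearson(rx, ry)            # Pearson applied to the ranks
d2 = np.sum((rx - ry)**2)
rho_via_formula = 1 - 6 * d2 / (30**3 - 30)  # classic Spearman formula (no ties)
```

Both routes give the same number, and by Cauchy-Schwarz it necessarily lies in $[-1, 1]$.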
Why Spearman's rank correlation ranges from -1 to 1
As you note, the sum includes $1^2+3^2+5^2+\cdots+(m-1)^2$, i.e. $\sum_{r=1}^{m/2} (2r-1)^2$. For simplicity I'll look at the case when there is an even number of rows, i.e. when $m$ is even.

You can expand the sum by expanding the bracket as $$\sum_{r=1}^{m/2} (2r-1)^2 = 4\sum r^2-4\sum r +\sum 1$$ and use the standard formulae for the sums: $\sum_{r=1}^n r^2 = \frac16 n(n+1)(2n+1)$ and $\sum_{r=1}^n r =\frac12 n(n+1)$.

This gives $$\frac46 (m/2)((m/2)+1)(m+1) - \frac42 (m/2)((m/2)+1) + m/2= \frac16 (m^3-m).$$

We can now substitute this into your expression: $$\frac{6\cdot(2\sum_{r=1}^{m/2} (2r-1)^2)}{m^3-m}$$ $$=\frac{6\cdot(2\cdot \frac16 (m^3-m))}{m^3-m}$$ $$=\frac{2 (m^3 -m)}{m^3-m}$$ $$=2$$

And so the Spearman rank coefficient is $1 - 2 = -1$.

So in this case, one can directly calculate the value. The standard formulae may not be familiar to you. Typically they are proved by induction, but a visual proof is offered on maths StackExchange: https://math.stackexchange.com/questions/122546/gaussian-proof-for-the-sum-of-squares
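A quick sketch confirming the algebra numerically: for a perfectly reversed ranking the formula lands exactly on $-1$ (the value of $m$ here is arbitrary):

```python
m = 10
rank_x = list(range(1, m + 1))
rank_y = rank_x[::-1]                                 # perfectly reversed ranks

# Sum of squared rank differences; equals (m^3 - m)/3 for reversed ranks
d2 = sum((a - b)**2 for a, b in zip(rank_x, rank_y))

rho = 1 - 6 * d2 / (m**3 - m)                         # Spearman's formula
print(rho)                                            # -1.0
```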
Are consistently negative Efron's pseudo-r2 in logistic regression possible?
Your problem is here:

$\pi$ is an array of 1s and 0s, representing the predicted outcome labels as a result of the logistic regressions.

That's incorrect. The $\pi$ values should be the predicted probabilities of class membership returned by logistic regression. See the explanation of the formula in the table on the UCLA web page that you cite.

Logistic regression does not return class-membership assignments. It sometimes appears to, based on a hidden assumption of p = 0.5 for categorization after modeling. But even if strict assignments are needed, that's not necessarily the best cutoff choice. Your formula is related to the accuracy of the classification (at the chosen probability cutoff), which is not a good choice for evaluating classification models of any sort.

The numerator of the second term in that formula, with $\pi$ values correctly taken as probabilities, is the basis of the Brier score, the mean-square error between observations and predicted probabilities. That's a strictly proper scoring rule, which you might consider using on its own. Chapter 8 of Frank Harrell's course notes covers several ways to evaluate the quality of logistic regression models.
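A sketch of the distinction on simulated data (the data and helper name are hypothetical; to stay library-agnostic I use the data-generating probabilities as a stand-in for fitted values rather than calling any particular regression routine):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
x = rng.normal(size=n)
pi = 1 / (1 + np.exp(-(0.5 + 1.5 * x)))   # predicted probabilities in (0, 1)
y = rng.binomial(1, pi)                   # observed 0/1 outcomes

def efron_r2(y, pi):
    """Efron's pseudo-R^2: 1 - sum (y - pi)^2 / sum (y - ybar)^2."""
    return 1 - np.sum((y - pi)**2) / np.sum((y - y.mean())**2)

r2_probs = efron_r2(y, pi)                           # correct: probabilities
r2_labels = efron_r2(y, (pi >= 0.5).astype(float))   # incorrect: 0/1 labels
print(r2_probs, r2_labels)
```

With probabilities the statistic behaves like a proper $R^2$; feeding in hard labels instead computes something accuracy-like that is not Efron's pseudo-$R^2$ and can easily go negative for poorly classified data.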
Are consistently negative Efron's pseudo-r2 in logistic regression possible?
I think it's important to remember what $R^2$ means in the linear case. $$ R^2=1-\left(\dfrac{ \overset{N}{\underset{i=1}{\sum}}\left( y_i-\hat y_i \right)^2 }{ \overset{N}{\underset{i=1}{\sum}}\left( y_i-\bar y \right)^2 }\right) $$ If we want to measure our ability to predict conditional means by how low a square loss we achieve, we had better have lower square loss than the naïve model that guesses $\bar y$ every time!

This is exactly what is going on in the pseudo-$R^2$. If you do a worse job of predicting the conditional probability (not label) than a naïve model that always predicts the overall prevalence, then the numerator is larger than the denominator, resulting in pseudo-$R^2<0$.

In the case of a probability model, square loss is called Brier score and is not the usual loss function. Brier score is, however, a strictly proper scoring rule, which means, a little loosely speaking, that it seeks out the true conditional probability values. The typical loss function in logistic regression is log loss, which corresponds to maximum likelihood estimation of the coefficients. It makes sense to compare the log loss values in a similar way. This is McFadden's $R^2$.

Indeed, I say that it always makes sense to compare how your model does on a loss function of interest compared to some baseline model. In OLS linear regression, there is a convenient interpretation about the "proportion of variance explained", but even if we lack such an interpretation, comparing our performance to the performance of a baseline model gives us some idea of whether our model provides value.

UCLA has a nice webpage about $R^2$-style metrics for probability models like logistic regression. Vanderbilt's Frank Harrell has some thoughts on how to measure the value added by a model.

EDIT: Worth remembering is that, if the logistic regression coefficients are estimated by minimizing log loss (equivalent to maximum likelihood estimation), even the in-sample Efron $R^2$ can be less than zero, since the objective of the coefficient estimation is not to minimize the Brier score but the log loss. In contrast, this cannot happen for in-sample McFadden's $R^2$, which addresses the explicit objective function used to estimate the model, except for computational funkiness.
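For comparison, a minimal sketch of McFadden's $R^2$, which makes the same model-versus-baseline comparison but on the log-loss scale (function names are mine; the probabilities are simulated, not from any particular fitted model):

```python
import numpy as np

def log_loss(y, pi):
    """Average negative Bernoulli log-likelihood."""
    return -np.mean(y * np.log(pi) + (1 - y) * np.log(1 - pi))

def mcfadden_r2(y, pi):
    """1 - (model log loss) / (intercept-only log loss)."""
    ll_model = log_loss(y, pi)
    ll_null = log_loss(y, np.full(len(y), y.mean()))  # baseline: overall prevalence
    return 1 - ll_model / ll_null

rng = np.random.default_rng(1)
x = rng.normal(size=2000)
pi = 1 / (1 + np.exp(-(0.3 + x)))   # probabilities with real predictive signal
y = rng.binomial(1, pi)
r2 = mcfadden_r2(y, pi)
print(r2)
```

Since the baseline (intercept-only) model is nested in any model fit by maximizing the likelihood, the in-sample McFadden $R^2$ of such a fit is non-negative by construction.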
Why is the z statistic of a binomial proportion test normally distributed?
Maybe the reason this "isn't obvious" to you is that it's not exactly true.

If $n$ is large and $p$ is not too far from $1/2,$ then for $X\sim\mathsf{Binom}(n, p),$ $X$ is approximately $\mathsf{Norm}(np, \sqrt{np(1-p)}),$ and $\hat p = X/n$ is approximately $\mathsf{Norm}(p, \sqrt{p(1-p)/n}).$ This follows from the Central Limit Theorem and other considerations.

However, that does not quite answer your question. In your expression, notice that you have $\hat p$ instead of $p.$ Again, if $n$ is sufficiently large, the Law of Large Numbers says (roughly) that $\hat p \approx p.$ Therefore, it is not exactly correct to say that $\frac{\hat p - p}{\sqrt{\hat p(1-\hat p)/n}}$ has a standard normal distribution. [In case $n$ is in the thousands, this expression is nearly standard normal.]

Hypothesis tests. Usually, the test statistic for testing $H_0: p = p_0$ against $H_a: p \ne p_0$ would be $z = \frac{\hat p - p_0}{\sqrt{p_0(1-p_0)/n}},$ where $p_0$ is the value of $p$ specified in the null hypothesis.

Confidence intervals. However, if you are making a confidence interval, there is no specified hypothetical value $p = p_0.$ The Wald 95% confidence interval is of the form $\hat p \pm 1.96\sqrt{\frac{\hat p(1-\hat p )}{n}}.$ Strictly speaking, this is an asymptotic confidence interval. That is, it is approximately correct only if $n$ is very large. A slight modification of the Wald CI is the Agresti-Coull CI, which has been shown to be more accurate than the Wald interval for small and moderate $n.$ Let $\check p = \frac{X+2}{n+4}.$ Then the A-C 95% CI is of the form $\check p \pm 1.96\sqrt{\frac{\check p(1-\check p )}{n+4}}.$

Note: See @Glen_b's link to Slutsky's Theorem as justification for use of $\hat p$ when $n$ is very large.
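The two intervals described above can be sketched directly (the function names are mine):

```python
import math

def wald_ci(x, n, z=1.96):
    """Wald 95% CI: p_hat +/- z * sqrt(p_hat * (1 - p_hat) / n)."""
    p = x / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p - half, p + half

def agresti_coull_ci(x, n, z=1.96):
    """Agresti-Coull 95% CI: add 2 successes and 2 failures, then do Wald."""
    p = (x + 2) / (n + 4)
    half = z * math.sqrt(p * (1 - p) / (n + 4))
    return p - half, p + half

print(wald_ci(50, 100))            # ~ (0.402, 0.598)
print(agresti_coull_ci(50, 100))   # slightly pulled toward 1/2 and narrower here
```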
Why is the z statistic of a binomial proportion test normally distributed?
The statistic should be $z = \frac{\hat p -p}{\sqrt{ p q / n}},$ which follows a shifted and scaled binomial distribution. This approaches the normal distribution. But also, the expression $z = \frac{\hat p -p}{\sqrt{\hat p \hat q / n}}$ will be approximately equal to $\frac{\hat p -p}{\sqrt{ p q / n}},$ since $\sqrt{pq/n}$ is the first term in the Taylor series expansion of $\sqrt{\hat p \hat q / n}$ around $\hat p = p$.
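A quick simulation sketch of that approximation (parameters are arbitrary): with a reasonably large $n$, the studentized statistic with $\hat p$ plugged into the denominator still has mean near 0 and standard deviation near 1.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, reps = 500, 0.4, 20000

phat = rng.binomial(n, p, size=reps) / n          # many simulated sample proportions
z = (phat - p) / np.sqrt(phat * (1 - phat) / n)   # plug-in (hat) standard error
print(z.mean(), z.std())                          # approximately 0 and 1
```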
Unbiased estimator of $ 1 + \mu^{2}$ from a Normal population
Any function of the data is called an estimator. There is no such thing as "THE" estimator of a quantity. Various estimators can have different properties. You have shown (correctly) that your estimator $\tfrac{1}{n}\sum x_i^2$ is unbiased for $1+\mu^2$. You could consider other estimators and they may have different properties (e.g. smaller or larger variance).
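A simulation sketch of that unbiasedness, with the variance fixed at $1$ as in the question and an arbitrary $\mu = 1$, so the target is $1+\mu^2 = 2$:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, n, reps = 1.0, 50, 100000

x = rng.normal(mu, 1.0, size=(reps, n))  # many samples of size n from N(mu, 1)
est = (x**2).mean(axis=1)                # (1/n) * sum x_i^2, one value per sample
print(est.mean())                        # close to 1 + mu^2 = 2
```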
Unbiased estimator of $ 1 + \mu^{2}$ from a Normal population
You can use the properties of a non-central chi-squared distribution to construct an unbiased estimator from the sum $$S_1 = \sum_{i=1}^n x_i^2.$$ This sum $S_1$ has mean $n+n\mu^2$. So $S_1/n$ will have mean $1+\mu^2$, and is indeed an unbiased estimator.

A more efficient estimator (an estimator with lower variance) can be made with $S_2 = \left( \sum x_i \right)^2$. If we scale this as $\frac{1}{n} S_2$, it will follow a non-central chi-squared distribution with 1 degree of freedom and mean $1+n\mu^2$. This distribution for $\frac{1}{n} S_2$ has a lower variance than the case with $S_1$. You can verify this easily based on the lower mean and lower degrees of freedom (the variance is a function of both of those). So, the estimator $\frac{1}{n^2} S_2 + 1 - \frac{1}{n}$ will be a more efficient estimator.

The last solution is an estimator. And like bdeonovic stresses in his answer, it is not the estimator. However, there is a type of estimator which is unique, and that is the minimum-variance unbiased estimator (MVUE). The estimator $\frac{1}{n^2} S_2 + 1 - \frac{1}{n}$ is the MVUE. This is because it is a function of the complete and sufficient statistic $\sum x_i$. See also the Lehmann-Scheffé theorem.
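A simulation sketch comparing the two estimators (again with variance $1$ and an arbitrary $\mu = 1$, so both should be unbiased for $1+\mu^2 = 2$, with the $S_2$-based estimator showing the smaller variance):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, n, reps = 1.0, 50, 100000

x = rng.normal(mu, 1.0, size=(reps, n))        # many samples of size n from N(mu, 1)

est1 = (x**2).mean(axis=1)                     # S1 / n
est2 = x.sum(axis=1)**2 / n**2 + 1 - 1/n       # S2 / n^2 + 1 - 1/n  (the MVUE)

print(est1.mean(), est2.mean())                # both close to 2
print(est1.var(), est2.var())                  # est2 has the smaller variance
```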
Best loss function for nonlinear regression
There's no such thing as a loss function "for" a particular kind of model. You could be using nonlinear regression with different loss functions. There are many loss functions, and you can even construct one yourself. The choice depends on the nature of your problem and the data you are dealing with.

Recall that minimizing some loss is equivalent to maximizing a likelihood function (e.g. using squared error is equivalent to assuming a Gaussian likelihood function), so it is tightly connected to the assumptions you are making about the distribution of errors. More formally, if you think of the model as something like $$ y = f(X) + \varepsilon $$ then the choice of model (e.g. linear regression, nonlinear regression, deep neural network, etc.) is related to estimating the expectation $E[y] = f(X)$, while the choice of the loss function impacts how you treat the residuals $y - f(X) = \varepsilon$.

For example, choosing squared error over absolute error penalizes outliers more, so it would be preferable if this is what you want to achieve. On the other hand, absolute error is less prone to outliers; this can be an advantage in another scenario. The most common choice is defaulting to squared error, though it is a somewhat arbitrary choice and doesn't have to be the best in all cases.
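A tiny sketch of the outlier point, using arbitrary made-up residuals: under squared error a single large residual dominates the total loss far more than under absolute error.

```python
import numpy as np

resid = np.array([0.1, -0.2, 0.15, -0.1, 5.0])     # last residual is an outlier

sq_share = resid[-1]**2 / np.sum(resid**2)         # outlier's share of squared loss
ab_share = abs(resid[-1]) / np.sum(np.abs(resid))  # outlier's share of absolute loss
print(sq_share, ab_share)                          # roughly 0.997 vs 0.90
```

So an optimizer minimizing squared error spends almost all of its effort on the outlier, while one minimizing absolute error weighs it far less.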
44,586
Best loss function for nonlinear regression
Other answers (like bdeonovic's and Tim's) discuss "robustness to outliers". I have to admit that while this point of view is extremely common, I do not like it very much. I find it more helpful to think in terms of which conditional fit (or prediction) we want. Use squared errors if you want conditional expectations as fits or predictions. ("Outliers" are then simply observations that are "far away" from the expectation, and which therefore pull the expectation towards them. If your aim is an expectation fit/prediction, then you should think long and hard about whether you want "robustness to outliers", because "outliers" are a fact of life.) Use absolute errors if you want conditional medians as fits or predictions. Use quantile (AKA pinball) losses if you want conditional quantiles as fits or predictions. I have written a short paper (Kolassa, 2020, IJF) on this in the context of forecasting, but the idea holds in precisely the same way for fits. Thus, I would recommend you think about what kind of fit/prediction you want, and then tailor your loss function to this.
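As a small illustration of the quantile-loss point (the sample and the level τ below are chosen arbitrarily), minimizing the pinball loss at level τ over a constant prediction recovers an empirical τ-quantile, here the 8th order statistic of the values 1..10:

```python
def pinball(q, ys, tau):
    """Pinball (quantile) loss of predicting the constant q for sample ys."""
    return sum(tau * (y - q) if y >= q else (1 - tau) * (q - y) for y in ys)

ys = list(range(1, 11))  # sample values 1, 2, ..., 10
tau = 0.75
grid = [i / 10 for i in range(0, 106)]  # candidate predictions 0.0 .. 10.5
best = min(grid, key=lambda q: pinball(q, ys, tau))  # -> 8.0
```

With τ = 0.5 the same search would return the median, consistent with absolute error being the τ = 0.5 special case.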
44,587
Best loss function for nonlinear regression
Most of the alternative loss functions are for making the regression more robust to outliers. I've seen all of the following in various software package implementations, but I haven't looked too hard into the literature comparing them:

least absolute deviation
least median of squares
least trimmed squares
metric trimming
metric winsorizing
Huber loss
Tukey's biweight loss
soft L1 loss
Cauchy loss
arctan loss

How are you doing the optimization? Did you code it yourself? Are you using Gauss–Newton or gradient descent? You may want to consider Levenberg–Marquardt (it interpolates between Gauss–Newton and gradient descent).
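As one concrete example from that list, the Huber loss can be sketched in a few lines (the threshold delta = 1 below is just a common default, not something the answer specifies): it is quadratic for small residuals and only linear for large ones, which is what blunts the influence of outliers.

```python
def huber(r, delta=1.0):
    """Huber loss of a residual r: quadratic inside [-delta, delta], linear outside."""
    a = abs(r)
    if a <= delta:
        return 0.5 * r ** 2
    return delta * (a - 0.5 * delta)

small = huber(0.5)   # quadratic regime: 0.5 * 0.5**2 = 0.125
large = huber(10.0)  # linear regime: 1.0 * (10 - 0.5) = 9.5
```

Compare large with the squared loss 0.5 * 10**2 = 50: the outlying residual is penalized far less under Huber.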
44,588
Convergence in $L_1$ counterexample
Let $X_n \sim \mathrm{Be}(n^{-1})$, that is, a Bernoulli random variable with success probability $n^{-1}$. Now consider $Y_n = \sqrt{n}X_n$. It is straightforward that $E(|Y_n-0|)= n^{-1/2}$. Hence $Y_n \overset{L_1}{\to} 0$. Since $E(|Y_n^2 - 0^2|) = 1$ you get that $Y_n^2 \not \overset{L_1}{\to} 0^2$. As a side note, this example shows that $L_1$ convergence is not preserved by continuous transformations, i.e. if $g : \mathbb R \to \mathbb R$ is a continuous function then $$ X_n \overset{L_1}{\to} X \quad \not \Rightarrow \quad g(X_n) \overset{L_1}{\to} g(X)$$ Convergence in distribution, in probability, and almost everywhere are all preserved by continuous transformations.
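Since $Y_n$ takes only the two values $0$ and $\sqrt n$, both expectations can be computed exactly from the two-point distribution; the short check below (an illustration, not part of the argument) shows $E|Y_n| = n^{-1/2} \to 0$ while $E[Y_n^2] = 1$ for every n.

```python
# Y_n = sqrt(n) * X_n with X_n ~ Bernoulli(1/n):
# Y_n equals sqrt(n) with probability 1/n and 0 otherwise.
ns = (1, 10, 100, 10_000)

def E_abs(n):
    """E|Y_n - 0| computed from the two-point distribution."""
    return (1 / n) * n ** 0.5

def E_sq(n):
    """E|Y_n^2 - 0| computed from the two-point distribution."""
    return (1 / n) * (n ** 0.5) ** 2

l1_norms = [E_abs(n) for n in ns]       # n^{-1/2}: shrinks to 0
l1_of_squares = [E_sq(n) for n in ns]   # constant 1 for every n
```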
44,589
Convergence in $L_1$ counterexample
In the answer of the linked question over on Math.SE and a comment of this page, it is suggested to take $$ f_n = n^{-1} \mathbf 1_{[0,n]}$$ Actually this does not work, because this example solves the converse problem ($L^2$ but not $L^1$), and moreover does so on a space that is not a probability space ($\mathbb R$ with uniform measure). Note that $\|f_n\|_{L^1} \equiv 1$, which shows that the $L^1$ norm converges, but this does not give you convergence in $L^1$ norm, as the almost everywhere limit is $f=0$, and if $f_n$ did converge in $L^1$ then it would have to converge to the a.e. limit, i.e. $\|f_n - 0 \|_{L^1} \to 0$. The $L^2$ norm, however, converges to $0$, proving convergence in $L^2$ norm to zero. (In fact there is even a slight discrepancy in the question, as the difference in $L^2$ can be written $\|f_n - f\|^2_{L^2} = \|(f_n-f)^2\|_{L^1}$, but convergence of the square in $L^1$ is $\|f_n^2 - f^2\|_{L^1}$. Thankfully this nonlinearity issue disappears when $f$ is zero.) A correct example was given in the Math.SE question's body: $\sqrt n\mathbf 1_{[0,1/n]}$ (with uniform probability on $[0,1]$). Another perhaps more trivial (and perhaps not in the spirit of the question) example can be given via sequences that are constant in $n$, simply because there are functions in $L^1$ whose squares are not in $L^1$, so the question of their convergence has no meaning. Any random variable with a mean and without variance will do; an example using the same uniform probability on $[0,1]$ is $\frac1{\sqrt x}$. PS One can prove partial results. For example, if $f,f_n$ are almost surely bounded uniformly in $n$, $\|f\|_{L^\infty} , \|f_n\|_{L^\infty} \le M$, then observe $$\|f_n^2 - f^2\|_{L^1} = \|(f_n - f)(f_n+f)\|_{L^1}\le 2M\| f_n - f\|_{L^1} \to 0.$$ This is of course consistent with the example of Manuel, as his $Y_n$ is not a.s. bounded in $n$.
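A crude numerical check of the correct example $\sqrt n\,\mathbf 1_{[0,1/n]}$ (the grid size m and the values of n below are arbitrary choices): midpoint-rule integration over $[0,1]$ shows the $L^1$ norm shrinking like $n^{-1/2}$ while the $L^1$ norm of the square stays at 1.

```python
def f(t, n):
    """f_n = sqrt(n) on [0, 1/n] and 0 elsewhere, under uniform measure on [0, 1]."""
    return n ** 0.5 if t <= 1.0 / n else 0.0

def integrate(g, m=10_000):
    """Midpoint rule on [0, 1] with m subintervals."""
    h = 1.0 / m
    return sum(g((i + 0.5) * h) for i in range(m)) * h

# (L1 norm of f_n, L1 norm of f_n^2) for each n; m is divisible by each n,
# so the step function is integrated essentially exactly.
results = {n: (integrate(lambda t: abs(f(t, n))),
               integrate(lambda t: f(t, n) ** 2)) for n in (4, 100)}
```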
44,590
If the predicted value of machine learning method is E(y | x), why bother with different cost functions for y | x?
The mean minimizing the root mean square error is often not the practical situation. It is well known that the mean E(Y|X) minimizes the root mean square error (RMSE). You are right, the theoretical mean $E(Y|X)$ minimizes the root mean square error of a prediction (independent of the distribution). So if minimizing the mean square error of a prediction is your goal and you know the theoretical mean, then indeed you do not need to care about the distribution (except whether the mean and variance exist for the distribution). However, often this theoretical mean is unknown and we use an estimate instead. Or we want to minimize something other than the mean squared error. In those cases you often need to use assumptions about the distribution of the errors in order to determine which estimator to use (to determine which one is optimal). So a typical situation is:

gather data from a population
compute an estimate of the distribution of the population based on the data
use the estimates directly (e.g. make some decision based on the estimates), or use the estimates to make a prediction (in which case the error due to the randomness in the population gets added on top of the randomness of the estimates about this population)

The situation that you sketch takes a shortcut to the final point and assumes that we know the population. This is very often not the case. (It can still be a practical case, for instance if we have so much information, a large sample, that we can estimate the population distribution with high accuracy and the biggest error in the prediction is due to the randomness in the population.) If the predicted value of a machine learning method is $E(y | x)$, why bother with different cost functions for $y | x$? A machine learning method does not provide $E(y|x)$; it provides an estimate of $E(y|x)$.
How good or bad estimators and predictors are will depend on the underlying distribution of the population (from which we can deduce the sample distribution of our estimator and predictor). Example: say we wish to estimate the location parameter of a Laplace-distributed population (and use that for prediction). In that case the sample median is a better estimator than the sample mean (i.e. the distribution of the sample median will be closer to the true parameter than the distribution of the sample mean; the error of the estimate will be smaller). Image: the sample median can be a better estimator than the sample mean; note that its distribution is more concentrated around the true location parameter (in this example 0). So based on the assumption that the errors are Laplace distributed, we should decide to use the sample median as an estimator and predictor, and not the sample mean. Difference between the cost function used for fitting and the cost function used for evaluation. Another underlying issue is the difference in cost functions: the cost function that is used to perform the fitting can be different from the cost function that is the objective. In the previous example with the Laplace distribution, the objective might be to minimize the expected mean squared error of the estimate/prediction, but we find the estimate that optimizes this objective by minimising the mean absolute error of the residuals. A related question is: Could a mismatch between loss functions used for fitting vs. tuning parameter selection be justified? In that question you minimize the (objective) cost function by cross validation, but in the answer it is demonstrated that it is still good to perform the fitting (during training) by means of a cost function that relates to the distribution of the error of the measurements. Quote from the chat: "My question had to do with how to choose one estimator or another one (i.e.
one loss function over another one)". An estimator can be expressed as the argmin of some cost function of the data/sample (e.g. the sample mean minimizes the sum of squared residuals, and the sample median minimizes the sum of absolute residuals). However, that is a different cost function from the cost function used to describe the performance of an estimator. So that's why we are bothered with cost functions: those cost functions allow us to evaluate the performance of an estimator. We can compute/estimate how often an estimator X makes a particular error and compare it with how often an estimator Y makes that particular error. And since there are many sizes of errors, we make a weighted sum of all possibilities by some cost function. E.g. the distribution of errors for estimators X and Y might be (a simplistic example):

Error size                  -2     -1      0      1      2
frequency for estimator X  0.00   0.25   0.50   0.25   0.00
frequency for estimator Y  0.02   0.18   0.60   0.18   0.02

The estimator X has the higher mean absolute error: half the time its error is $\pm1$, giving a mean absolute error of 0.5, while for estimator Y it is 0.44. However, in terms of the expected mean squared error, estimator X (with 0.5) is lower than estimator Y (with expected mean squared error 0.52). To compute these comparisons you need to be able to know/estimate the sample distribution of the estimators (like in the above example this is done for the Laplace distribution and the sample mean and the sample median) and some cost function to compare those distributions. (In the case of the Laplace distribution and the sample mean vs sample median, you have that the sample median is stochastically dominant, and for any convex cost function the sample median will be better than the sample mean, so you do not always need to know the evaluation cost function in detail.
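The arithmetic behind that comparison can be reproduced directly from the frequency table; this snippet only re-computes the numbers quoted above.

```python
errors = [-2, -1, 0, 1, 2]
freq_x = [0.00, 0.25, 0.50, 0.25, 0.00]
freq_y = [0.02, 0.18, 0.60, 0.18, 0.02]

def mean_abs_error(freq):
    """Expected absolute error under the given error frequencies."""
    return sum(p * abs(e) for p, e in zip(freq, errors))

def mean_sq_error(freq):
    """Expected squared error under the given error frequencies."""
    return sum(p * e ** 2 for p, e in zip(freq, errors))

mae_x, mae_y = mean_abs_error(freq_x), mean_abs_error(freq_y)  # 0.5 vs 0.44
mse_x, mse_y = mean_sq_error(freq_x), mean_sq_error(freq_y)    # 0.5 vs 0.52
```

So X wins on squared error while Y wins on absolute error, which is exactly why the evaluation cost function matters.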
Related question: Estimator that is optimal under all sensible loss (evaluation) functions)

R code to create the graph:

### generate data
set.seed(1)
s <- 100000
n <- 5
x <- matrix(L1pack::rlaplace(s*n, 0, 1), s)
medians <- apply(x, 1, median)
means <- apply(x, 1, mean)

### compute frequency histogram
breaks <- seq(floor(min(medians, means)), ceiling(max(medians, means)), 0.02)
hmedians <- hist(medians, breaks = breaks)
hmeans <- hist(means, breaks = breaks)

### plot results
plot(hmedians$mids, hmedians$density, type = "l",
     ylim = c(0, 1.5), xlim = c(-1.4, 1.4),
     xlab = "estimate value", ylab = "density / histogram", lty = 2)
lines(hmeans$mids, hmeans$density)
lines(c(0, 0), c(0, 2), lty = 1, col = "gray")
title("samples of size 5 from Laplace distribution\ncomparison of sample distribution for different estimates",
      cex.main = 1)
legend(-1.4, 1.5, c("sample median", "sample mean"), lty = c(2, 1), cex = 0.7)
44,591
If the predicted value of machine learning method is E(y | x), why bother with different cost functions for y | x?
Say we know that Y follows a distribution with density f. If that statement is true, you would not want to try different distributional assumptions. If it is not true, then you should consider modeling different assumptions, because it can have a substantial impact on your results. Why even bother with different cost functions like the negative log-likelihood? Because we should be using our true loss function. Unfortunately, the training most people get doesn't really give them a way to parse through the issues with varying the loss function. Let me give you a real-world problem. Some things must be purchased such that $x\ge{k}$, where $k$ is an unknown constant. If $x<k$ then one hundred percent of the material purchased is unusable and must be destroyed. You must then begin again. If you purchase $x>k$ then $x-k$ must be destroyed and is a loss. On either side, the loss per unit is the waste times $c$. Suppose $k=1000$ and $\hat{k}=999$, while $c=\$1000$. Being one unit to the left will cost you \$999,000. One unit to the right will cost you \$1,000. Minimizing quadratic loss would imply you should take catastrophic losses fifty percent of the time. The expectation is a disaster. Solutions to real-world problems can be markedly suboptimal if you use the RMSE as your objective function.
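The cost structure in that example is easy to write down explicitly (k and c as given above; the cost of "beginning again" that the answer mentions is not priced here): a two-line check reproduces the \$999,000 vs \$1,000 asymmetry.

```python
def purchase_loss(x, k=1000, c=1000):
    """Cost of purchasing x units when the true requirement is k, at c per wasted unit."""
    if x < k:
        return c * x        # the whole batch is unusable and destroyed
    return c * (x - k)      # only the surplus is destroyed

left = purchase_loss(999)    # one unit short -> 999_000
right = purchase_loss(1001)  # one unit over  -> 1_000
```

The loss is wildly asymmetric around k, so an estimator tuned to symmetric quadratic loss is the wrong tool here.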
44,592
If the predicted value of machine learning method is E(y | x), why bother with different cost functions for y | x?
The answer is simpler than it seems. Though the sample mean in the simplest case, or least squares estimates in the multivariable predictor case, provide unbiased estimates of the long-run mean, these estimates can be wrong or highly inefficient. In the case of a simple mean, i.e., when there are no predictors X, if the sample comes from a log-normal distribution the sample mean on the original scale is a terrible estimate of E(Y). The best estimator is a function of the mean and standard deviation after taking logs. In the multivariable situation, a least squares estimate of E(Y|X) provides unbiased estimates of the mean if the model structure is correctly specified for the right-hand side of the model, but the estimates of E(Y|X) as a function of X can be wrong for every observation, in the sense that all the regression coefficients are wrong even though they "add up" to something that is right. If Y|X has a lognormal distribution, for example, and you did not take log(Y) when computing least squares estimates, you will get bad predictions when you examine the predictions in an X-specific manner.
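A quick simulation of the first point (the parameters mu = 0, sigma = 1 and the sample size are arbitrary illustration choices): for lognormal data, the naive back-transform exp(mean(log y)) targets the median $e^\mu$, not the mean $E(Y) = e^{\mu+\sigma^2/2}$, so an estimate of E(Y) really does need both the mean and the standard deviation on the log scale.

```python
import math
import random
import statistics

random.seed(0)
mu, sigma, n = 0.0, 1.0, 200_000
y = [random.lognormvariate(mu, sigma) for _ in range(n)]

true_mean = math.exp(mu + sigma**2 / 2)   # E(Y) = e^{mu + sigma^2/2} ~ 1.6487
empirical_mean = statistics.fmean(y)
# Back-transforming only the log-scale mean recovers the median e^mu = 1, not E(Y):
naive_backtransform = math.exp(statistics.fmean(math.log(v) for v in y))
```

The gap between naive_backtransform (near 1) and empirical_mean (near 1.65) is exactly the missing $\sigma^2/2$ correction.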
44,593
What is exponential entropy?
I will begin with building intuitions for the discrete case and then discuss the continuous case. The discrete case First, consider exponential entropy for the special case of a discrete uniform distribution $U^N$ over $N$ outcomes, i.e. $U^N_i = \frac{1}{N}$. It's easy to show that exponential entropy is equal to the number of outcomes $N$: \begin{align} \exp\left(H\left(U^N\right)\right)& = \exp\left(-\sum_i U^N_i \ln(U^N_i)\right)\\ & = \exp\left(-\sum_i \frac{1}{N} \ln\left(\frac{1}{N}\right)\right)\\ & = \exp\left(N \frac{1}{N} \ln\left(N\right)\right)\\ & = N \end{align} For an arbitrary probability distribution over $M$ outcomes $P^M$, there is then some number $N \leq M$ such that: \begin{align} N = \exp\left(H\left(U^N\right)\right) \leq \exp\left(H\left(P^M\right)\right) \leq \exp\left(H\left(U^{N+1}\right)\right) = N + 1 \end{align} with equality ($N = M$) just in case $P^M$ is uniform. From this inequality, we can interpret exponential entropy as the effective number of outcomes: The probability distribution $P^M$ has about as much uncertainty as a uniform distribution over $\left\lfloor\exp\left(H\left(P^M\right)\right)\right\rfloor$ or $\left\lceil\exp\left(H\left(P^M\right)\right)\right\rceil$ outcomes. Intuitively, a probability distribution with exponential entropy near 2 is about as uncertain as a fair coin flip, and a probability distribution with exponential entropy near 1 is nearly deterministic. Exponential entropy is sometimes called perplexity. In this context, the base of the exponent and logarithm are typically written as 2 rather than $e$, but it doesn't matter since $2^{\log_2(x)} = e^{\log_e(x)} = x$. Predicting a sample We can use these metrics and intuitions for understanding how well a probability distribution predicts a sample. Call the true data distribution $P$, and the distribution we are measuring $Q$.
In a typical use case, $Q$ is a model we have estimated, and now we want to measure how well it fits data that is distributed according to $P$. The cross-entropy of $Q$ relative to $P$ is: \begin{align} H(P, Q) & = -\sum_i P_i \ln Q_i \end{align} In this typical use case, we cannot compute the cross-entropy exactly because we do not know $P$ (otherwise we would use $P$ instead of estimating $Q$). Instead, we gather a dataset $D$, or sample, that is distributed according to $P$, and perform a Monte Carlo estimate of $H(P, Q)$ by averaging across the dataset: \begin{align} H(P, Q) & = -\sum_i P_i \ln Q_i \\ & \approx -\frac{1}{T} \sum_{i\sim P_i} \ln Q_i \\ & = -\frac{1}{T} \sum_{i\in D} \ln Q_i \end{align} where $D$ is just a dataset containing $T$ observations that we are treating as a random sample from the true distribution (note that $D$ may contain duplicate entries, and may lack some entries entirely). Note that $H(P, Q) \geq H(P)$, with equality just in case $P=Q$, so lower cross-entropy indicates that $Q$ is closer to $P$. If we exponentiate the cross-entropy to get the perplexity, we see how uncertain the distribution was on average when predicting each observation. A typical application is language modeling: if the perplexity is 100, then, on average, the model was as uncertain in predicting the next word as if it were choosing uniformly among 100 possible next words. Note that $D$ can be a different sample (still from $P$) from the one that was used to estimate $Q$. In this case, the perplexity is computed on held-out data and provides a measure of how well the model generalizes to unseen data from the same distribution it was estimated on, and can be compared to the perplexity on the estimation dataset to assess whether your model has overfit the estimation data. The continuous case Shannon obtained the continuous version of entropy in your post by simply replacing the summation sign with an integral rather than performing a rigorous derivation.
You can approximate a continuous distribution by binning the random variable and then defining a probability distribution over the bins, with the approximation improving as the number of bins increases. In this sense, you can view the exponential entropy of the approximating distribution in a similar way. Unfortunately, as the number of bins goes to infinity to make the discrete distribution approach the continuous distribution in the limit, you end up with an inconvenient infinity in the expression. On reflection, this is not so surprising, as the probability of a single real number under a continuous distribution is zero.
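A small numerical sketch of these ideas (my own code, toy distributions): the exponential entropy of a uniform over $N$ outcomes is exactly $N$, a skewed distribution falls strictly below that, and exponentiating a Monte Carlo cross-entropy estimate gives the perplexity.

```python
import numpy as np

def exp_entropy(p):
    """Exponential (base-e) entropy of a discrete distribution."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                          # treat 0 * log(0) as 0
    return np.exp(-np.sum(p * np.log(p)))

uniform4 = np.ones(4) / 4
skewed4 = np.array([0.7, 0.1, 0.1, 0.1])
n_eff_uniform = exp_entropy(uniform4)     # exactly 4: four effective outcomes
n_eff_skewed = exp_entropy(skewed4)       # strictly between 1 and 4

# Monte Carlo perplexity of a model Q on data drawn from the true P
rng = np.random.default_rng(1)
P = np.array([0.5, 0.3, 0.2])             # true distribution (toy)
Q = np.array([0.4, 0.4, 0.2])             # estimated model (toy)
data = rng.choice(len(P), size=10_000, p=P)
perplexity = np.exp(-np.mean(np.log(Q[data])))   # estimates exp(H(P, Q)) >= exp(H(P))
```

Since $H(P,Q) \geq H(P)$, the estimated perplexity of $Q$ sits above the exponential entropy of $P$ itself.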
44,594
What is exponential entropy?
It's just my two cents, but I can think of an interpretation, following part of the development of the KL divergence and working from it: Let's consider the discrete case, with a probability distribution $p_1...p_n$. Its entropy is $S = -\sum _i p_i \log p_i$ (just the discrete form of what you posted). Now, let's say we have $N$ variables following this distribution. The probability for $m_1$ of them to have value $1$, $m_2$ to have value $2$ and so forth is $ H= \prod_i {p_i}^{m_i} $ (where $\sum_i m_i =N$). Now, if we ask what's the probability of those $m$'s to have the same proportions as the probability distribution (i.e. $m_i = Np_i$; never mind $Np_i$ being an integer), we have $H=\prod_i {p_i}^{N p_i} =(\prod_i {p_i}^{p_i})^N $ We can define the inner expression as $H_1$, having $H = H_1 ^N $; you can see that $-\log H_1 = S$. This allows us to understand the exponential of the entropy as the (inverse of the) probability of a sample drawn from a distribution to follow the same proportions as that distribution (properly corrected for sample size).
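This identity is easy to verify numerically; a quick sketch (toy distribution) checking that $H = H_1^N$ and $-\log H_1 = S$, i.e. $H = e^{-NS}$:

```python
import numpy as np

p = np.array([0.5, 0.3, 0.2])   # toy distribution
N = 10

S = -np.sum(p * np.log(p))      # entropy of p (natural log)
H = np.prod(p ** (N * p))       # probability of one exactly proportional sample
H1 = np.prod(p ** p)            # the per-draw factor

assert np.isclose(H, H1 ** N)          # H = H_1^N
assert np.isclose(-np.log(H1), S)      # -log H_1 = S
assert np.isclose(H, np.exp(-N * S))   # hence H = exp(-N S)
```

So $e^{S} = 1/H_1$: the exponential of the entropy is the reciprocal of the per-draw probability factor described above.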
44,595
What is exponential entropy?
Exponential entropy measures the extent of a distribution, and can be used to avoid the case of singularity when the weighted average entropy of some variables is zero, $\bar{H}(X) = 0$. Campbell, L. “Exponential Entropy as a Measure of Extent of a Distribution.” Z. Wahrscheinlichkeitstheorie verw., 5 (1966), pp. 217–225.
44,596
What is exponential entropy?
Entropy can be used as a measure of diversity, as biodiversity in ecology, or of income inequality, ... see for instance How is the Herfindahl-Hirschman index different from entropy?. In ecology one is then interested in the effective number of species, and it turns out this is given as the exponential of entropy, see How to include the observed values, not just their probabilities, in information entropy?.
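As a tiny illustration (hypothetical abundance counts), the exponential of Shannon entropy gives this effective number of species: an even community of four species counts as 4, while a community dominated by one species counts as barely more than 1.

```python
import numpy as np

def effective_species(counts):
    """exp(Shannon entropy) of relative abundances (the Hill number of order 1)."""
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()
    p = p[p > 0]
    return np.exp(-np.sum(p * np.log(p)))

n_eff_even = effective_species([25, 25, 25, 25])   # four equally common species -> 4
n_eff_dom = effective_species([97, 1, 1, 1])       # one dominant species -> close to 1
```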
44,597
Monotonic splines in Python [closed]
Hi, I do not know the specifics of your problem but you might find the following reference really interesting: Eilers, 2006 (especially paragraph 3). The idea presented in the reference is rather simple to implement (there should be also some matlab code in the appendix). Anyway, below you will find my own attempt :-) A bit of context The paper uses a smoother technique known as P-splines. P-splines have been introduced by Eilers and Marx, 1996 and combine B-splines (defined on equally spaced knots) and finite difference regularization of the spline coefficients (the second reference also contains some code you can use to get accustomed to the methodology if you want). In my answer I will use a special case of P-splines, the Whittaker graduation method (see e.g. Eilers, 2003 which also contains some computer code in the appendix). Force the smoother to be monotonic: asymmetric penalties To achieve monotonicity, we want the first differences of the estimated Whittaker smoother to have the same sign (all negative or all positive). Suppose we want a monotonically increasing fit. Following Eilers, 2006 we can write our problem as $$ S = \|y - z\|^{2} + \lambda \| \Delta^{(3)} z\|^{2} + \kappa \sum v_{i} (\Delta^{(1)} z_{i})^{2} $$ where $z$ is the vector of unknowns, $\Delta^{(3)}$ is a third order difference operator, $\Delta^{(1)}$ is the first order difference operator between adjacent smoothed values, $v_{i}$ is a weighting factor with value 1 if $\Delta^{(1)} z_{i} < 0$ and 0 otherwise (so only decreasing stretches are penalized), $\lambda$ is a smoothing parameter and $\kappa$ is a large constant (let's say equal to $10^6$). Below you will find a small example comparing the monotone and non-monotone fits.
import numpy
import matplotlib.pyplot as plt

# Simulate data
N = 100
xlo = 0.001
xhi = 2 * numpy.pi
x = numpy.arange(xlo, xhi, step=(xhi - xlo) / N)
y0 = numpy.sin(x) + numpy.log(x)
y = y0 + numpy.random.randn(N) * 0.5

# Prepare identity basis and difference penalties
dd = 3
E = numpy.eye(N)
D3 = numpy.diff(E, n=dd, axis=0)
D1 = numpy.diff(E, n=1, axis=0)
la = 100       # smoothing parameter (lambda)
kp = 10000000  # large constant (kappa) for the asymmetric penalty

# Monotone smoothing: iterate until the set of violated differences stabilizes
ws = numpy.zeros(N - 1)
for it in range(30):
    Ws = numpy.diag(ws * kp)
    mon_cof = numpy.linalg.solve(E + la * D3.T @ D3 + D1.T @ Ws @ D1, y)
    ws_new = (D1 @ mon_cof < 0.0) * 1   # penalize decreasing stretches only
    dw = numpy.sum(ws != ws_new)
    ws = ws_new
    if dw == 0:
        break
    print(dw)

# Monotonic and non-monotonic fits
z = mon_cof
z2 = numpy.linalg.solve(E + la * D3.T @ D3, y)

# Plots
plt.scatter(x, y, linestyle='None', color='gray', s=0.5, label='raw data')
plt.plot(x, z, color='red', label='monotonic smooth')
plt.plot(x, z2, color='blue', linestyle='--', label='unconstrained smooth')
plt.legend(loc="lower right")
plt.show()

I hope this helps a bit (or at least you find it interesting).
44,598
Monotonic splines in Python [closed]
I don't know of a Python package that explicitly fits splines, but you should be able to achieve your goal with gradient boosting in the most recent version of scikit-learn (https://scikit-learn.org/stable/auto_examples/release_highlights/plot_release_highlights_0_23_0.html). Specifically, you can fit a generalized additive model using HistGradientBoostingRegressor and setting max_depth=1, which ensures that there will be no interactions between features (if that's what you want). You can then use monotonic_cst to specify the monotonicity constraints for each feature. A similar option (monotone_constraints) also exists in XGBoost.
44,599
Bayesian inference and testable implications
There are only two "principled" ways you can get out of your posited model that operate within the framework of the Bayesian paradigm. One is to initially set a broader class of models, and give some non-zero prior probability to the alternative models in that class (i.e., have a prior probability less than one for your posited model class). The other is to observe some evidence that has zero density under all distributions in the posited model class, which then allows you to update to any belief you want (see discussion here). If you have assigned a prior probability of one to a class of models, and you never observe evidence that is inconsistent with those models, you can never "escape" that set of models within the Bayesian paradigm. Note that this is by design --- if you assign a prior probability of one to a set of models, you are saying that any alternative class of models has zero probability. In short, you are choosing to stick with your posited class of models no matter how strongly the evidence turns against them, so long as it is not inconsistent with those models. If you would like to have a principled "escape route" operating within the Bayesian paradigm, you will need to posit some broader class of alternative models and give it a non-zero prior probability. You could certainly give the alternative models a very low prior probability, so that they only become important a posteriori when the main model class starts to be (probabilistically) falsified by the data. Implementation in your problem: In the problem you raise, it would be usual to handle this by framing the problem as a Bayesian hypothesis test, with hypotheses: $$H_0: \mu_1 = c \mu_2 \quad \quad \quad H_A: \mu_1 \neq c \mu_2.$$ For example, under $H_0$ you could posit an overall model like this: $$\begin{aligned} X_{11}, X_{12}, ... , X_{1n} | \mu_2,\sigma_1^2,\sigma_2^2 &\sim \text{N}(c \mu_2,\sigma_1^2), \\[6pt] X_{21}, X_{22}, ... 
, X_{2n} | \mu_2,\sigma_1^2,\sigma_2^2 &\sim \text{N}(\mu_2,\sigma_2^2), \\[6pt] \mu_2 &\sim \text{N}(0, \eta^2), \\[6pt] \sigma_1^2 &\sim \text{Ga}(\alpha, \beta), \\[6pt] \sigma_2^2 &\sim \text{Ga}(\alpha, \beta), \\[6pt] \end{aligned}$$ and under $H_A$ you could posit an overall model like this: $$\begin{aligned} X_{11}, X_{12}, ... , X_{1n} | \mu_1,\mu_2,\sigma_1^2,\sigma_2^2 &\sim \text{N}(\mu_1,\sigma_1^2), \\[6pt] X_{21}, X_{22}, ... , X_{2n} | \mu_1,\mu_2,\sigma_1^2,\sigma_2^2 &\sim \text{N}(\mu_2,\sigma_2^2), \\[6pt] \mu_1 &\sim \text{N}(0, \eta^2), \\[6pt] \mu_2 &\sim \text{N}(0, \eta^2), \\[6pt] \sigma_1^2 &\sim \text{Ga}(\alpha, \beta), \\[6pt] \sigma_2^2 &\sim \text{Ga}(\alpha, \beta). \\[6pt] \end{aligned}$$ You can obtain the Bayes' factor for the above hypothesis test and use this to see how you update prior probabilities for the hypotheses to posterior probabilities. If the data makes $H_0$ highly implausible, this will manifest in a lower posterior probability for $H_0$. Given some prior probability $\lambda = \mathbb{P}(H_0)$ for your posited subclass of models, you will be able to update this to a posterior probability.
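To make this concrete, here is a crude Monte Carlo sketch of the Bayes factor (entirely my own; the data, $c$, and hyperparameters are made up): each marginal likelihood is approximated by averaging the data likelihood over draws from the corresponding prior, which is noisy but fine for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
c, eta, alpha, beta = 2.0, 3.0, 2.0, 1.0      # hypothetical constant and hyperparameters

# hypothetical data, roughly consistent with H0: mu_1 = c * mu_2
x1 = rng.normal(2.0, 1.0, 30)
x2 = rng.normal(1.0, 1.0, 30)

def log_marginal(under_h0, draws=5_000):
    """Monte Carlo estimate of log p(data | H) by averaging over prior draws."""
    mu2 = rng.normal(0.0, eta, draws)
    mu1 = c * mu2 if under_h0 else rng.normal(0.0, eta, draws)
    s1 = rng.gamma(alpha, 1.0 / beta, draws)  # sigma_1^2 ~ Ga(alpha, beta), beta a rate
    s2 = rng.gamma(alpha, 1.0 / beta, draws)
    ll = (stats.norm.logpdf(x1[None, :], mu1[:, None], np.sqrt(s1)[:, None]).sum(axis=1)
          + stats.norm.logpdf(x2[None, :], mu2[:, None], np.sqrt(s2)[:, None]).sum(axis=1))
    m = ll.max()
    return m + np.log(np.mean(np.exp(ll - m)))  # log-mean-exp for numerical stability

bf = np.exp(log_marginal(True) - log_marginal(False))   # Bayes factor BF_01
lam = 0.5                                               # prior probability of H0
post_h0 = lam * bf / (lam * bf + (1.0 - lam))           # posterior probability of H0
```

The simple-average estimator of the marginal likelihood has high variance, so in practice one would use something like bridge sampling or nested sampling, but the updating logic from prior $\lambda$ to posterior probability is exactly as in the last line.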
44,600
Bayesian inference and testable implications
Prior predictive and posterior predictive checks may be helpful here. In both cases you sample the predictions from the model (the "fake data"), in the first case from the prior, in the second case from the posterior distribution, and then compare the distributions of the fake data with the distribution of the observed data. Prior predictive checks are aimed at diagnosing prior-data conflict, i.e. the model a priori does not make reasonable predictions that cover the possible range of the values observed in the data: it is ill-defined a priori. In posterior predictive checks you sample from the predictions after estimating the parameters (i.e. from the posterior), so you check if the predictions that the model makes fit the observed data. In both cases, there are many ways of doing this, depending on the particular problem, ranging from eyeballing the histograms, density plots, scatter plots, summary statistics etc, up to defining more formal tests (data falls within a pre-specified interval, hypothesis tests to compare the distributions, etc). This is a routine practice in Bayesian modeling. If I understand you correctly, the model that you use as an example assumes that your data $X$ comes from a mixture of two Gaussians, with unknown means $\mu_1, \mu_2$ and known variances $\sigma^2_1, \sigma^2_2$, and a known constraint $c$, such that $\mu_2 = c\mu_1$. A simple way to test this model is to treat $c$ as a free parameter, to be estimated. You know what $c$ should be, so you can come up with a strong, informative prior for it. In that case, it would surprise you if the estimated $c$ differed from the true value. If I understand you correctly, that's the property of the model that you want to test. To test the validity of this assumption, you could take samples from the posterior distribution $\hat c_i$, and compare them to the true value of $c$, e.g.
you would accept the model if, in at least $100\alpha\%$ of cases, the predicted values for $c$ were within the $\pm \varepsilon$ range from the truth $$ \alpha \le 1/n \sum_{i=1}^n \mathbf{1}(|c - \hat c_i| < \varepsilon) $$ This is not exactly a posterior predictive check, since we may argue whether $c$ is data or not, but it follows the spirit of the kind of checks you would make to test model validity. Incidentally, Michael Betancourt has just published a lengthy Towards A Principled Bayesian Workflow tutorial, where among other things, he discusses the importance of the prior and posterior checks discussed above.
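The acceptance rule above takes one line once posterior draws for $c$ are available; in this sketch the draws are simulated stand-ins for the output of a real sampler.

```python
import numpy as np

rng = np.random.default_rng(0)

c_true = 2.0
c_draws = rng.normal(2.02, 0.05, 4_000)   # stand-in for posterior samples of c

eps, alpha = 0.15, 0.9                    # tolerance and required coverage
coverage = np.mean(np.abs(c_draws - c_true) < eps)
accept = coverage >= alpha                # passes the check if enough draws fall near the truth
```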
Bayesian inference and testable implications