Multicollinearity between ln(x) and ln(x)^2
The source of the collinearity is that $f(x) = x^2$ is monotone over positive $x$, so $x$ and $x^2$ move together. One way to reduce the correlation between $x$ and $x^2$ is to center $x$: let $z = x - E(x)$ and compute $z^2$. After centering, the low end of the scale has large negative values whose squares are large, so $z^2$ is U-shaped in $z$ rather than monotone, making the relationship between $z$ and $z^2$ much less linear than that between $x$ and $x^2$. This advice comes from The Analysis Factor: http://www.theanalysisfactor.com/centering-for-multicollinearity-between-main-effects-and-interaction-terms/ Note: when interpreting the effects, remember that you centered the covariate. Also, some researchers caution against centering or scaling because it makes the results of your model data-dependent. Here is some perspective from Andrew Gelman on that issue: http://andrewgelman.com/2009/07/11/when_to_standar/
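To see the effect numerically, here is a small illustrative sketch (not from the original answer; the variable names are hypothetical) comparing the correlation of a positive predictor with its square, before and after centering:

```python
# Illustrative sketch: centering a positive predictor sharply reduces its
# correlation with its own square. Standard library only.
def corr(u, v):
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v)) / n
    su = (sum((a - mu) ** 2 for a in u) / n) ** 0.5
    sv = (sum((b - mv) ** 2 for b in v) / n) ** 0.5
    return cov / (su * sv)

x = [float(i) for i in range(1, 101)]   # a positive predictor
z = [xi - sum(x) / len(x) for xi in x]  # centered version

r_raw = corr(x, [xi ** 2 for xi in x])       # high: x and x^2 nearly collinear
r_centered = corr(z, [zi ** 2 for zi in z])  # ~0: z is symmetric, so E[z^3] = 0
```

Because this `x` is symmetric about its mean, the centered correlation is essentially zero; with a skewed predictor it would shrink rather than vanish.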
Instrumental variable exclusion restriction
There are two criteria for good instruments:

1. The instrument $z$ is correlated with the endogenous variable $x$ (relevance).
2. The instrument $z$ affects the dependent variable $y$ only through $x$; in other words, $z$ does not itself cause $y$. This is the exclusion restriction.

You can check 1 (relevance) statistically, but must make good arguments to support 2 (exclusion). For example, suppose we want to estimate the effect of police ($x$) on crime ($y$) in a cross-section of cities. One issue is that places with lots of crime will hire more police. We therefore seek an instrument $z$ that is correlated with the size of the police force but unrelated to crime. One possible $z$ is the number of firefighters. The assumptions are that cities with lots of firefighters also have large police forces (relevance) and that firefighters do not affect crime (exclusion). Relevance can be checked with the reduced-form regression of $x$ on $z$, but whether firefighters also affect crime is something to be argued for; in theory they do not, so firefighters are a valid instrument. If you are curious about this specific example, see Levitt (1997), McCrary (2002), and Levitt (2002).
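A hypothetical simulation (not from the original answer) makes the two criteria concrete: below, $z$ is relevant (it moves $x$) and excluded (it affects $y$ only through $x$), while an unobserved confounder $u$ biases OLS. With one instrument and one regressor, the IV (Wald) estimator is simply $\hat\beta_{IV} = \widehat{\text{cov}}(z,y)/\widehat{\text{cov}}(z,x)$:

```python
# Sketch: OLS is biased by a confounder, the IV estimator is not.
# All names and parameter values here are hypothetical.
import random

random.seed(1)
n, beta = 20000, 2.0
u = [random.gauss(0, 1) for _ in range(n)]   # unobserved confounder
z = [random.gauss(0, 1) for _ in range(n)]   # instrument: relevant, excluded
x = [zi + ui + random.gauss(0, 1) for zi, ui in zip(z, u)]
y = [beta * xi + ui for xi, ui in zip(x, u)] # u also drives y -> x is endogenous

def cov(a, b):
    ma, mb = sum(a) / n, sum(b) / n
    return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / n

b_ols = cov(x, y) / cov(x, x)  # biased upward: absorbs the confounder
b_iv = cov(z, y) / cov(z, x)   # IV (Wald) estimator: consistent for beta = 2
```

Here the population OLS slope is $2 + 1/3 \approx 2.33$, while the IV estimate concentrates around the true $\beta = 2$.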
Log odds ratio - what happens if linearity fails?
If the functional relationship between the exposure and the average response is not an S-shaped logistic curve, there are still reasons why we might consider an S-shaped logistic curve as a meaningful summary of those data. As an example, we might have omitted a prognostic factor from a model, meaning that the true marginal relationship between the exposure and the outcome is not logistic, but a complicated semi-logistic function that averages risks across several conditional logistic curves. This is the principle of non-collapsibility in logistic regression. Basically, we can rarely be certain that the S-shaped logistic trend is, in fact, the "right" one... but it is a useful one! All models are wrong; some models are useful. Kenji is right that when we try to approximate an S-shaped trend and the data show strong distributional violations, there may be some sensitivity analyses to consider, like testing for higher-order polynomial effects. Another type of test to consider is breakpoints, adjusting for "knots" so that trends can change direction. These approaches are hybridized in splines and made even more general by using LOESS curves to explore general non-linear relationships between exposures and outcomes. Nonetheless, you may revert to the original question: you may say "I want to summarize these data using a single logistic curve whose intercept represents the log odds of the outcome for exposure = 0 and whose slope is the log odds ratio as a measure of association between an exposure and an outcome." The desire then is to obtain a robust error estimate that is unbiased and consistent. The S-curve then is taken to summarize a first-order trend in the data, which you can think of as a rule of thumb: does the risk tend to increase or decrease as the exposure goes up, and by how much? To do this, you need only apply sandwich-based standard errors.
This can be done using Generalized Estimating Equations with working independence covariance structure, logistic link, and binomial variance structure.
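A minimal sketch of the sandwich idea, assuming NumPy (this is not from the original answer; a GEE routine with an independence working correlation, logistic link, and binomial variance automates the same computation). The logistic MLE is found by Newton-Raphson, and the robust covariance is bread-meat-bread, with the "meat" built from empirical squared residuals rather than the model-based variance:

```python
# Sketch: sandwich (robust) standard errors for a logistic regression,
# fit by hand with Newton-Raphson. Simulated data; values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)
p_true = 1 / (1 + np.exp(-(0.5 + 1.0 * x)))
y = rng.binomial(1, p_true)

X = np.column_stack([np.ones(n), x])
beta = np.zeros(2)
for _ in range(25):  # Newton-Raphson for the logistic MLE
    mu = 1 / (1 + np.exp(-X @ beta))
    H = X.T @ (X * (mu * (1 - mu))[:, None])  # Fisher information
    beta += np.linalg.solve(H, X.T @ (y - mu))

mu = 1 / (1 + np.exp(-X @ beta))
H = X.T @ (X * (mu * (1 - mu))[:, None])
bread = np.linalg.inv(H)
meat = X.T @ (X * ((y - mu) ** 2)[:, None])  # empirical score variance
V_robust = bread @ meat @ bread              # sandwich covariance
se_robust = np.sqrt(np.diag(V_robust))
```

When the logistic curve is only a working summary of a non-logistic truth, the model-based and sandwich standard errors can diverge; the sandwich version remains a consistent estimate of the variability of the fitted first-order trend.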
Log odds ratio - what happens if linearity fails?
You get biased and inconsistent coefficient estimates, and biased standard errors. Bias in the standard errors can be in either direction, and the probabilities of Type I and Type II errors can increase. You can tackle non-linearity by introducing different functional forms of the predictor that has a non-linear relationship with Y. Common functional forms are quadratic, logarithmic, cubic, and square root, among others. You can also think about including splines, and possibly interactions between two or more predictors. A last possibility is to use a different link function for the binary relationship, as functions such as the probit and complementary log-log have slightly different shapes, albeit all of them sigmoidal.
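For concreteness, here is an illustrative sketch (not from the original answer) of the three inverse link functions mentioned. All are S-shaped and bounded in $(0,1)$, but the complementary log-log is asymmetric about its midpoint, which is what makes it a genuinely different modeling choice:

```python
# Sketch: three inverse link functions for binary regression.
import math

def inv_logit(eta):    # logistic: symmetric around 0.5
    return 1 / (1 + math.exp(-eta))

def inv_probit(eta):   # standard normal CDF
    return 0.5 * (1 + math.erf(eta / math.sqrt(2)))

def inv_cloglog(eta):  # complementary log-log: asymmetric
    return 1 - math.exp(-math.exp(eta))

# At eta = 0, logit and probit both give 0.5, while cloglog gives
# 1 - exp(-1), reflecting its asymmetry.
for eta in (-2, 0, 2):
    print(eta, inv_logit(eta), inv_probit(eta), inv_cloglog(eta))
```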
Log odds ratio - what happens if linearity fails?
The assumption that your target probability can be modelled as a linear combination of log-odds ratios scaled by your inputs is equivalent to assuming that it is a combination of independent pieces of Bernoulli evidence. When that's not the case, you typically build a more complex model with cross terms. Seeing the logistic function as some arbitrary sigmoid link function really hides the assumption you're making.
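A small numeric sketch of that equivalence (the probabilities below are hypothetical, not from the original answer): when two binary features really are conditionally independent pieces of evidence, summing their log-odds contributions reproduces the posterior from a direct application of Bayes' rule:

```python
# Sketch: independent Bernoulli evidence combines by adding log odds,
# which is exactly what a logistic model's linear predictor does.
import math

prior = 0.3               # P(C = 1), hypothetical
p1_c1, p1_c0 = 0.8, 0.4   # P(feature1 = 1 | C = 1), P(feature1 = 1 | C = 0)
p2_c1, p2_c0 = 0.7, 0.2   # same for feature2

# Route 1: direct Bayes rule under conditional independence.
num = prior * p1_c1 * p2_c1
den = num + (1 - prior) * p1_c0 * p2_c0
p_direct = num / den

# Route 2: sum of log-odds contributions, then invert the logit.
lo = (math.log(prior / (1 - prior))
      + math.log(p1_c1 / p1_c0)
      + math.log(p2_c1 / p2_c0))
p_logodds = 1 / (1 + math.exp(-lo))
```

When the features are not independent, the two routes disagree, and that gap is what cross terms in a richer model are absorbing.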
Kruskal-Wallis test: assumption testing and interpretation of the results
The KW test (also the Mann-Whitney U-test) is essentially always a test for stochastic dominance. What that means is that it tests whether there exists at least one group such that you would typically get a larger (or smaller) value from it than from the rest if you drew a value at random from each. People assume this means that one median or mean must be greater than the other, but that isn't necessarily true. If the shapes and variances of the distributions are identical (i.e., one group's distribution is just shifted up or down relative to the others), then stochastic dominance implies a greater mean and median (and also a greater third quartile, fifth percentile, etc.). However, if the shapes / variances of the distributions differ, then that isn't necessarily the case. For further discussion of these topics, and to see an example where the means are switched, see my answer here: Wilcoxon-Mann-Whitney test giving surprising results. For an example where the medians are equal, but there is nonetheless a stochastically dominant group, consider this:

```r
g1 = c(rep(0, 11), 1:10)                  # group 1 has 11 0s, & then 1 to 10
g2 <- g3 <- g4 <- c(-10:-1, rep(0, 11))   # the other groups have 11 0s, & -10 to -1
d = stack(list(g1=g1, g2=g2, g3=g3, g4=g4))

aggregate(values~ind, d, median)  # the median of every group is 0
#   ind values
# 1  g1      0
# 2  g2      0
# 3  g3      0
# 4  g4      0

kruskal.test(values~ind, d)  # the KW test is highly significant nonetheless
#  Kruskal-Wallis rank sum test
# 
# data:  values by ind
# Kruskal-Wallis chi-squared = 28.724, df = 3, p-value = 2.559e-06
```

With this understanding in mind, we can answer your specific questions:

- If the distributions within each group (of chicks) / condition (feed type) have the same shape and variance, a significant KW test implies there is at least one group that is stochastically greater (or lesser) than the others, and its mean (and median, and first quartile, and eighty-eighth percentile, etc.) is higher (or lower) than the other groups'.
- If the distributions differ in shape and/or variance, a significant KW test implies there is at least one group that is stochastically greater (or lesser) than the others, but its mean (and median, and first quartile, and eighty-eighth percentile, etc.) is not necessarily higher (or lower) than the other groups'.
- I would not bother running Levene's test before KW.
- I would not bother running the Kolmogorov-Smirnov test before KW. Examining qq-plots seems reasonable.
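For readers curious what the R call is actually computing, here is an illustrative re-derivation of the Kruskal-Wallis statistic for the same data in Python (a sketch, not from the original answer): pool all values, assign midranks to ties, form the rank-sum statistic, and apply the standard tie correction:

```python
# Sketch: Kruskal-Wallis H with midranks and tie correction, by hand.
def kruskal_wallis(groups):
    pooled = sorted(v for g in groups for v in g)
    N = len(pooled)
    rank = {}
    i = 0
    while i < N:                 # midrank: average rank over each tie run
        j = i
        while j < N and pooled[j] == pooled[i]:
            j += 1
        rank[pooled[i]] = (i + 1 + j) / 2   # average of ranks i+1 .. j
        i = j
    H = 12 / (N * (N + 1)) * sum(
        sum(rank[v] for v in g) ** 2 / len(g) for g in groups) - 3 * (N + 1)
    ties = sum(t ** 3 - t for t in (pooled.count(v) for v in set(pooled)))
    return H / (1 - ties / (N ** 3 - N))    # tie correction

g1 = [0] * 11 + list(range(1, 11))
g2 = g3 = g4 = list(range(-10, 0)) + [0] * 11
H = kruskal_wallis([g1, g2, g3, g4])   # matches R's chi-squared of 28.724
```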
Kruskal-Wallis test: assumption testing and interpretation of the results
This question (Non-normal distribution even with Kruskal-Wallis test) has a nice summary of the difference between inferring stochastic dominance vs. equality of medians. To answer your specific questions: the K-W test for chickwts gives

```r
	Kruskal-Wallis rank sum test

data:  weight by feed
Kruskal-Wallis chi-squared = 37.343, df = 5, p-value = 5.113e-07
```

- As a test of medians, we would infer that the median weight after six weeks for at least one group differs from the median weights of the remaining groups. As a test of stochastic dominance, we can only infer that for at least one group, a randomly chosen member of that group is more likely than not to be heavier than a randomly chosen member from another group.
- If you intend to use KW as a test of medians, then some sort of heteroscedasticity test is warranted. If you find evidence of heteroscedasticity, then you can only infer stochastic dominance from KW.
- If you want to use KW as a test of medians, checking for distributional similarity is a good idea. I'd choose QQ plots over the K-S test, since the former doesn't require specifying a specific distribution. If the QQ plots suggest differing distributions, then you can only infer stochastic dominance from KW.
Is it possible to get fitted values 0 or 1 in logistic regression when the fitting algorithm converges?
R is giving you two different warnings because these really are two distinct issues. Very loosely, the algorithm that fits a logistic regression model (typically some version of Newton-Raphson) looks around for the coefficient estimates that will maximize the log likelihood. It evaluates the model at a given point in the parameter space, sees which direction is 'uphill', and then moves some distance in that direction. The potential problem with this is that when perfect separation exists, the maximum of the log likelihood is approached only as a slope goes to infinity. Because a search algorithm has to be designed to stop at some point, it doesn't converge.

On the other hand, no matter where it stops, whether it converged or not, it is (theoretically) possible to calculate the model's predicted values for the data. However, because computers use finite-precision arithmetic, they eventually need to round off or drop extremely small decimal values. Thus, if the arithmetically correct value is sufficiently close to 0 or 1, the rounded value can end up being 0 or 1 exactly. Fitted values can have that property for observations within the normal range of the data, when complete separation produces an extremely large (in absolute value) slope estimate, or for observations so far out on $X$ that even a small slope leads to the same phenomenon.

```r
# I'll use this function to convert log odds to probabilities
lo2p = function(lo){ exp(lo) / (1+exp(lo)) }

set.seed(163)  # this makes the example exactly reproducible
x  = c(-500, runif(100, min=-3, max=3), 500)  # the x-values; 2 are extreme
lo = 0 + 1*x
p  = lo2p(lo)
y  = rbinom(102, size=1, prob=p)

m = glm(y~x, family=binomial)
# Warning message:
# glm.fit: fitted probabilities numerically 0 or 1 occurred 
summary(m)
# ...
# Coefficients:
#             Estimate Std. Error z value Pr(>|z|)    
# (Intercept)   0.3532     0.3304   1.069    0.285    
# x             1.3686     0.2372   5.770 7.95e-09 ***
# ...
#     Null deviance: 140.420  on 101  degrees of freedom
# Residual deviance:  63.017  on 100  degrees of freedom
# AIC: 67.017
# 
# Number of Fisher Scoring iterations: 9
```

Here we see that we got the second warning, but the algorithm converged: the betas are reasonably close to the true values, the standard errors aren't huge, and the number of Fisher scoring iterations is moderate. Nonetheless, the extreme x-values yield predicted log odds that are perfectly calculable, but that become essentially 0 and 1 when converted into probabilities.

```r
predict(m, type="link")[c(1, 102)]      # these are the predicted log odds
#         1       102 
# -683.9379  684.6444 
predict(m, type="response")[c(1, 102)]  # these are the predicted probabilities
#            1          102 
# 2.220446e-16 1.000000e+00 
```
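The rounding behaviour is easy to reproduce outside R. Here is an illustrative Python sketch of the same lo2p transformation in double precision (written in the numerically stable two-branch form; not from the original answer): once `exp(-lo)` drops below half a ulp of 1.0, the fitted probability is exactly 1.0, and once `exp(lo)` underflows, it is exactly 0.0:

```python
# Sketch: where fitted probabilities become exactly 0 or 1 in float64.
import math

def lo2p(lo):
    # Numerically stable version of exp(lo) / (1 + exp(lo)).
    if lo >= 0:
        return 1 / (1 + math.exp(-lo))
    e = math.exp(lo)   # underflows to exactly 0.0 for very negative lo
    return e / (1 + e)

# exp(-40) is smaller than half a ulp of 1.0, so 1 + exp(-40) == 1.0
# in double precision and the probability rounds to exactly 1.0.
p_hi = lo2p(40)    # exactly 1.0
p_ok = lo2p(36)    # still strictly less than 1.0
p_lo = lo2p(-800)  # exactly 0.0: exp(-800) underflows to 0.0
```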
How to simulate Poisson arrival times if the rate varies with time?
There are already several reasonable answers, but to try to add some clarity, here I will synthesize them and do some verification of the resulting algorithm.

Homogeneous Process

For a homogeneous 1D Poisson process with constant rate $\lambda$, the number of events $n$ in a time interval $T$ follows a Poisson distribution $$n\sim\text{Poiss}_\lambda$$ and the arrival times $t_1,\ldots,t_n$ are i.i.d. uniform through the interval $$t_i\sim\text{Unif}_{[0,T]}$$ while the waiting times between events are i.i.d. exponential $$\tau_i=t_{i+1}-t_i\sim\text{Exp}_\lambda$$ with CDF $$\Pr\big[\tau<\Delta{t}\big]=1-e^{-\lambda\Delta{t}}$$ So a homogeneous Poisson process can be simulated easily by first sampling $n$ and then sampling $t_{1:n}$ (or, alternatively, sampling $\tau$ until $t=\sum\tau>T$).

Inhomogeneous Process

For an inhomogeneous Poisson process with rate parameter $\lambda(t)$, the above can be generalized by working in the transformed domain $$\Lambda(t)=\int_0^t\lambda(s)\,ds$$ where $\Lambda(t)$ is the expected cumulative number of events up to time $t$. (As noted in Aksakal's answer, and in the reference cited in jth's answer.) In the generalized approach we simulate in the $\Lambda$ space, where "time" is dilated so that in the deformed timeline the Poisson process is a homogeneous process with unit rate. After simulating this $\Lambda$ process, we map the samples $\Lambda_{1:n}$ into arrival times $t_{1:n}$ by inverting the (monotonic) $\Lambda(t)$ mapping. Note that for the piecewise-constant rates $\lambda(t)$ here, the mapping $\Lambda(t)$ is piecewise linear, and so very easy to invert.

Example Simulation

I made a short code in Octave to demonstrate this (listed at the end of this answer). To try to clear up questions about the validity of the approach for small rates, I simulate an ensemble of concatenated simulations. That is, we have 3 rates $\boldsymbol{\lambda}=[\lambda_1,\lambda_2,\lambda_3]$, each of duration 1 hour. To gather better statistics, I instead simulate a process with $\boldsymbol{\hat{\lambda}}=[\boldsymbol{\lambda},\boldsymbol{\lambda},\ldots,\boldsymbol{\lambda}]$, repeatedly cycling through the $\boldsymbol{\lambda}$ vector to allow larger sample sizes for each of the $\lambda$'s (while preserving their sequence and relative durations).

The first step produces a distribution of waiting times (in $\Lambda$ space) whose empirical CDF compares very well to the expected theoretical CDF (i.e. a unit-rate exponential). The second step produces the final ($t$-space) arrival times, and here the comparison with theory is a little more involved. Each of the three $\lambda$'s has an associated homogeneous-process component CDF, and we expect the aggregate CDF to be a mixture of these components $$\Pr\big[\tau<\Delta{t}\big]=\sum_iw_i\Pr\big[\tau<\Delta{t}\mid\lambda=\lambda_i\big]$$ where $w_i=\Pr\big[\lambda=\lambda_i\big]$ are the mixing fractions. For each component process, the expected number of samples is $\langle{n_i}\rangle=\lambda_iT_i$, and since the durations $T_i$ are equal, the expected mixing weights scale with the rates, i.e. $$w_i\propto\lambda_i$$ The empirical CDF of the inhomogeneous simulation is consistent with the theoretically expected CDF obtained by mixing the components in proportion to their rates. (The figures showing these comparisons are omitted here.)

Example Code

The following Octave code demonstrates the approach. (Note: the algorithm itself is only 4 lines; the bulk of the code is comments and verification.)
%% SETUP lam0=[1.5,2.1,3.4]; dt0=ones(size(lam0)); % rates and their durations Nrep=1e3; lam=repmat(lam0,[1,Nrep]); dt=repmat(dt0,[1,Nrep]); % do replications (for stats check) %% SIMULATION L=cumsum([0,lam.*dt]); t=cumsum([0,dt]); % cumulative expected # events and time Lmax=L(end); N=poissrnd(Lmax); % sample total # events Lsmp=Lmax*sort(rand([1,N])); % sample event times (in "L space") tsmp=interp1(L,t,Lsmp); % transform to "t space" %% STATS CHECK % "L space" waiting time CDF dL=sort(diff(Lsmp)); p=(1:N-1)/N; % simulated p0_L=1-exp(-dL); % exponential h=plot(dL,p,'k',dL,p0_L,'r','LineWidth',1.5); set(h(1),'LineWidth',4); axis tight; xlabel('dL'); legend('data','theory (exp)',4); title('L space CDF'); % "t space" waiting time CDF dT=sort(diff(tsmp)); % simulated wcmp=(lam0.*dt0)/(lam0*dt0'); pcmp=1-exp(-lam0'*dT); p0=wcmp*pcmp; % mixture subplot(211); plot(dT,pcmp); ylabel('CDF'); title('Mixture Components'); axis tight; legend(cellstr(num2str(lam0','lam = %0.1f ')),4); subplot(212); h=plot(dT,p,'k',dT,p0,'r','LineWidth',1.5); set(h(1),'LineWidth',4); axis tight; xlabel('dt'); ylabel('CDF'); title('Aggregate'); legend('data','theory (exp mixture)',4); % mean arrival rate tbin=0.5:2.5; Navg=hist(mod(tsmp,3),tbin)/Nrep; bar(tbin,Navg,1); hold on; plot(tbin,lam0,'r.'); hold off; legend('observed (hist.)','theory (lam.)',0); xlabel('hour'); ylabel('arrivals/hour'); As requested in the comments, here is an example of the simulated mean arrival rates (see also updated code above):
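For readers outside Octave, the same two-step algorithm (sample a unit-rate process in $\Lambda$ space, then invert the piecewise-linear $\Lambda(t)$) can be sketched in Python. The function name and NumPy usage here are my own illustration, not part of the original script:

```python
import numpy as np

def simulate_inhomogeneous_poisson(rates, durations, rng):
    """Time-rescaling construction for piecewise-constant rates:
    sample a unit-rate homogeneous process in Lambda space,
    then map the samples back to t space."""
    rates = np.asarray(rates, dtype=float)
    durations = np.asarray(durations, dtype=float)
    # Lambda(t) at the piece boundaries (cumulative expected event counts)
    L = np.concatenate(([0.0], np.cumsum(rates * durations)))
    t = np.concatenate(([0.0], np.cumsum(durations)))
    n = rng.poisson(L[-1])                       # total number of events
    L_smp = np.sort(rng.uniform(0.0, L[-1], n))  # event "times" in L space
    return np.interp(L_smp, L, t)                # invert piecewise-linear Lambda

rng = np.random.default_rng(0)
arrivals = simulate_inhomogeneous_poisson([1.5, 2.1, 3.4], [1.0, 1.0, 1.0], rng)
```

Because $\Lambda$ is piecewise linear and increasing, the inversion is just linear interpolation over the piece boundaries.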
How to simulate Poisson arrival times if the rate varies with time?
39,010
How to simulate Poisson arrival times if the rate varies with time?
There are several methods. For simple rate functions like yours I find the inversion approach to be the easiest. See algorithm 3 of this paper; it's 5 lines. Edit: and the algorithm is (note that L must be defined at the top level so that Rinv can see it):

rates = c(1, 2, 3)
L = length(rates)
s = sum(rates)
cs = c(0, cumsum(rates))

R = function(t) {
  t0 = floor(t)
  d = t0 %% L + 1
  return(s * floor(t / L) + cs[d] + (t - t0) * rates[d])
}

Rinv = function(y) {
  t0 = L * y / s
  return(uniroot(function(t) R(t) - y, c(t0, t0 + L))$root)
}

# The actual algorithm itself is only 2 lines.
x = cumsum(rexp(1000)) # desired nbr of points
y = sapply(x, Rinv)

If efficiency matters then you'd want to analytically invert R(t).
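As a sketch of that last remark, here is one way the analytic inverse could look in Python. The function `R_inv` and its NumPy implementation are my illustration, assuming the same unit-length rate pieces as the R code above:

```python
import numpy as np

rates = np.array([1.0, 2.0, 3.0])
s = rates.sum()                        # expected events per full cycle
cs = np.concatenate(([0.0], np.cumsum(rates)))

def R_inv(y):
    """Closed-form inverse of the piecewise-linear cumulative rate R(t)."""
    cycles, r = divmod(y, s)           # whole cycles completed, remainder within cycle
    d = int(np.searchsorted(cs, r, side="right")) - 1  # piece index within the cycle
    return cycles * len(rates) + d + (r - cs[d]) / rates[d]
```

Each call costs a binary search over the pieces instead of an iterative root find, which matters when mapping many arrival times.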
39,011
How to simulate Poisson arrival times if the rate varies with time?
It's pretty straightforward, thanks to the fact that the interarrival times of a Poisson process are exponentially distributed and the exponential distribution has the memoryless property. (As an aside, note that if you are generating the time points, you are generating the interarrival times too, as they are just the differences between successive time points, and vice versa.)

One strategy is to generate the number of arrivals in each of the 3 x 1000 hours, then make use of the fact that, conditional on the count, the arrival times are uniformly distributed within each hour. If you have, say, 3 arrivals in hour 1, you can generate the arrival times for that hour by generating three uniform variates over the hour. You can ignore "crossing the hour boundary" effects because of the memoryless property of the exponential distribution. For example (the index arithmetic maps hours 1, 2, 3 to rates 1.5, 2.1, 3.4 respectively):

rates <- c(1.5, 2.1, 3.4)
arrival_times <- c()
for (hour in 1:3000) {
  n_arrivals <- rpois(1, rates[(hour - 1) %% 3 + 1])
  arrival_times <- c(arrival_times, runif(n_arrivals) + hour - 1)
}
arrival_times <- sort(arrival_times)

The first and last few arrival times are:

> c(head(arrival_times), tail(arrival_times))
 [1] 6.175204e-02 5.907350e-01 1.066275e+00 1.089332e+00 1.638492e+00 2.296899e+00 2.996753e+03 2.996817e+03 2.997005e+03
[10] 2.997376e+03 2.998619e+03 2.999689e+03
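The same hour-by-hour strategy can be sketched in Python for comparison; the variable names and seed are mine, and any vectorized equivalent works just as well:

```python
import numpy as np

rates = [1.5, 2.1, 3.4]
rng = np.random.default_rng(42)

times = []
for hour in range(3000):                           # 3 x 1000 hours
    lam = rates[hour % 3]                          # rate for this hour
    n = rng.poisson(lam)                           # number of arrivals in the hour
    times.extend(hour + rng.uniform(0.0, 1.0, n))  # uniform within the hour
times = np.sort(np.array(times))
```

Over 3000 hours the expected total count is 1000 * (1.5 + 2.1 + 3.4) = 7000 arrivals.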
39,012
How to simulate Poisson arrival times if the rate varies with time?
Your intensities are so low that it's impossible to ignore the boundary effects. By that I mean that the time between the last event in the first hour and the first event in the next hour is governed by two Poisson (exponential) processes.

Method 1: The simplest way to treat this (certainly not the most efficient) is to construct the nonhomogeneous intensity function $\lambda(t)$, where $t$ is time. Next, step in time by one second (or a smaller interval), then generate the Bernoulli experiments, until you get to the end of the time.

Method 2: However, a much better (faster) solution is to scale the time instead of scaling the intensity. Here's the idea. You have a homogeneous Poisson process, BUT the time speeds up after one hour, NOT the intensity increases. Your clock ticks faster after the first hour by a factor of $\frac{2.1}{1.5}=1.4$ or $\frac{3.4}{1.5}\approx 2.27$. So, all you need to do is generate a bunch of exponential arrival times, then compress the times after the first hour by the appropriate factors.

Solution for Method 1

This is not an R forum, and other folks might be interested in the solution, so here's pseudo code:

function y = lambda(t)
  % nonhomogeneous Poisson intensity
  % t - time in hours
  if t < 1
    y = 1.5;
  else if t < 2
    y = 2.1;
  else
    y = 3.4;
  end if
end

function times = f()
  % times (in hours) of the event occurrences
  list times;
  dt = 1/3600;
  for t = 0 to 3 step dt
    if rand() < lambda(t)*dt
      times.add(t);
    end if
  end
end

Solution for Method 2

function times = f()
  lambda = 1.5
  list times
  t = 0
  do
    dt = random_exponential(lambda)
    t = t + dt
    st = compress(t)
    if st < 3
      times.add(st)
    end if
  while st < 3
end

% compress slow time to faster times after the 1st hour
function st = compress(t)
  scale2 = 2.1/1.5
  scale3 = 3.4/1.5
  t2 = 1 + scale2
  st = min(t,1) + (min(t,t2) - min(t,1))/scale2 + (max(t,t2) - t2)/scale3
end
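To make Method 2's rescaling concrete, the compress step can be written as an ordinary function; this Python transcription of the pseudocode is my own sketch:

```python
def compress(t, rates=(1.5, 2.1, 3.4)):
    """Map time on the rate-rates[0] homogeneous clock onto real time:
    the first hour maps identically, the next scale2 slow-clock hours
    map onto real hour 2, and the next scale3 onto real hour 3."""
    scale2 = rates[1] / rates[0]   # 1.4
    scale3 = rates[2] / rates[0]   # ~2.27
    t2 = 1.0 + scale2              # slow-clock time at which real time hits 2
    return (min(t, 1.0)
            + (min(t, t2) - min(t, 1.0)) / scale2
            + max(t - t2, 0.0) / scale3)
```

Dividing by scale2 compresses a slow-clock interval of length scale2 into one real hour, which raises the effective rate there from 1.5 to 2.1, and similarly for the third hour.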
39,013
How to simulate Poisson arrival times if the rate varies with time?
If Python is not a problem for you, I advise you to have a look at this new open-source library: tick. Among others, it provides simulation and inference methods for point processes. You can find an example explaining the simulation of an inhomogeneous Poisson process here.
39,014
Computing Confidence Intervals for Coefficients in Logistic Regression [duplicate]
I just discovered that someone answered this question in another post. The answer is that confint uses profile likelihood confidence intervals, whereas I was computing a Wald confidence interval (which can equivalently be computed using confint.default).
39,015
Consistency in mean square vs. "normal" consistency
ADDENDUM 30-3-2017

An important clarification: none of the derivations below guarantees that "$\mu$" is the "true value" we are attempting to estimate. All we show is that if $\theta_n$ converges in $L^2$ to some constant, then this constant is also its probability limit. Whether the probability limit of the estimator is the true value, and hence whether the estimator is consistent, is not proven here. So the whole derivation below presupposes that $\mu$ is, after all, the "true value".

$\newcommand{\E}{\mathbb{E}}$ Assume that we don't know whether $\mu$ is the mean, the probability limit, etc., of our estimator $\theta_n$. We can write
$$\E[(\theta_n-\mu)^2] = \mu^2 - 2\E(\theta_n)\mu + \E(\theta_n^2)$$
which we can view as a quadratic polynomial in $\mu$. To obtain convergence in $L^2$ we need this quadratic to not be bounded away from zero, as a necessary condition. Being a quadratic, we can easily examine its roots. Its discriminant is
$$\Delta_{\mu} = 4[\E(\theta_n)]^2 - 4\E(\theta_n^2) = -4\text{Var}(\theta_n)$$
We want the discriminant to be greater than or equal to zero, otherwise the polynomial won't have real roots. Since the variance is non-negative, we need, at least asymptotically, $\text{Var}(\theta_n) \to 0$. Given this, we then have asymptotically the double root
$$\mu = \lim \E(\theta_n)$$
So if $\E[(\theta_n-\mu)^2] \to 0$, it means that $\text{Var}(\theta_n) \to 0$ and $\lim \E(\theta_n) = \mu$. These are sufficient conditions for consistency (sufficient but not necessary, either because the variance may not even exist, or because of situations like this one). [And again, they are sufficient for consistency if we assume from the start that $\mu$ is the true value we are trying to estimate.]

And why should these conditions be sufficient for consistency? What do they have to do with the probability statement
$$\Pr(|\theta_n -\mu| > \varepsilon) \to 0\,?$$
Well, as another answer mentioned, this probability is tied to the variance of the distribution by Chebyshev's Inequality, so if $\mu$ is the asymptotic expected value of $\theta_n$ then
$$\Pr(|\theta_n -\mu| > \varepsilon) \leq \frac{\text{Var}(\theta_n)}{\varepsilon^2} $$
So if $\lim \E(\theta_n) = \mu$, Chebyshev's Inequality is applicable, and then if $\text{Var}(\theta_n) \to 0$, the probability goes to zero. And so the intuition for $L^2$-convergence being sufficient for consistency is, in my view, shifted to whether we understand Chebyshev's Inequality intuitively...

...because here too the intellectual objection of the OP appears: a non-squared difference appears bounded by a squared difference, which "for small deviations" (smaller than unity) "is smaller". Well, the "intervening" operators (probability, expected value) have a lot to do with it, since ($I\{\}$ being the indicator function)
$$\Pr(|\theta_n -\mu| > \varepsilon) =\E\left(I\{|\theta_n -\mu| > \varepsilon\}\right) $$
$$= \E\left(I\left \{\frac{(\theta_n -\mu)^2}{\varepsilon^2} >1 \right\}\right) \leq \E\left(\frac{(\theta_n -\mu)^2}{\varepsilon^2} \right)$$
...and this last inequality holds because
$$I\left \{\frac{(\theta_n -\mu)^2}{\varepsilon^2} >1 \right\} \leq \frac{(\theta_n -\mu)^2}{\varepsilon^2} $$
and it is when I saw the above, and realized why this last inequality holds, that I gained some intuition about Chebyshev's Inequality.
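As an aside (my own numerical illustration, not part of the original argument), the Chebyshev bound and the resulting convergence in probability are easy to see for the sample mean of i.i.d. normal draws:

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma2, eps = 0.0, 1.0, 0.2
reps = 20000

tails, bounds = [], []
for n in (100, 400, 1600):
    # 'reps' replications of the sample mean of n i.i.d. N(mu, sigma2) draws;
    # the sample mean is distributed N(mu, sigma2 / n), so sample it directly
    theta_n = rng.normal(mu, np.sqrt(sigma2 / n), size=reps)
    tails.append(np.mean(np.abs(theta_n - mu) > eps))  # empirical tail probability
    bounds.append((sigma2 / n) / eps ** 2)             # Chebyshev: Var(theta_n)/eps^2
```

The empirical tail probability stays below the Chebyshev bound and shrinks toward zero as $n$ grows, which is exactly the convergence-in-probability statement.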
39,016
Consistency in mean square vs. "normal" consistency
This seems to really be more of a question about mathematical probability, so I will ignore the statistical context of your question in this answer. Please let me know if this is not helpful enough.

1. Yes, it is true that convergence in mean square, also called convergence in $L^2$, implies convergence in probability. This is, for example, the statement of Lemma 2.2.2, p. 54 of Durrett's Probability - Theory and Examples, 4th edition.

2. Regarding intuition, you may be thinking of "convergence almost surely" when you say "convergence in probability". It is true that convergence in mean square does not imply convergence almost surely. (But the converse isn't true either, see here.) Otherwise, all I can do is restate the formal definitions and the proof that convergence in mean square implies convergence in probability (from Durrett) to explain "why" -- that is not to say that there isn't an intuitive explanation for this, just to say that I don't have an intuitive understanding. The basic idea, though, is just "Markov's inequality". (Note: Durrett calls "Chebyshev's inequality" what most people refer to as "Markov's inequality" -- what most people refer to as "Chebyshev's inequality" is a special case of "Markov's inequality".)

This is all quoted from the first pages of section 2.2, Weak Laws of Large Numbers.

We say that $Y_n$ converges to $Y$ in probability if for all $\varepsilon > 0$, $\mathbb{P}(|Y_n - Y| > \varepsilon) \to 0$ as $n \to \infty$...

Lemma 2.2.2. If $p > 0$ and $\mathbb{E}|Z_n|^p \to 0$ then $Z_n \to 0$ in probability.

Proof. Chebyshev's inequality [monotonic version of Markov's inequality] with $\varphi(x)=x^p$ and $X= |Z_n|$ implies that if $\varepsilon > 0$ then
$$\mathbb{P}(|Z_n| \ge \varepsilon) \le \varepsilon^{-p} \mathbb{E}|Z_n|^p \to 0 \,. \quad \quad\square$$

The proof of the form of Markov's inequality used in the proof can be found e.g. here.
39,017
Consistency in mean square vs. "normal" consistency
I have a very straightforward answer to this from a machine learning perspective. Given that $\theta$ is mean square consistent, a reformulation is (the bias-variance tradeoff): \begin{equation} \mathbb{E}\left[(\theta-\mu)^2\right] \to 0 \Leftrightarrow Var\left[\theta\right] + Bias\left(\theta,\mu\right)^2 \to 0 \end{equation} as $n \to \infty$. As both terms are non-negative, this implies that $Var\left[\theta\right] \to 0$ and $Bias\left(\theta,\mu\right)^2 \to 0$. Now, as mentioned above, with the help of Chebyshev's inequality, convergence in probability follows.
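The decomposition above is an algebraic identity, which a quick numerical check makes explicit. The shrinkage estimator below is just a convenient example of a biased estimator, my own choice, not something from the answer:

```python
import numpy as np

rng = np.random.default_rng(7)
mu, n, reps = 2.0, 50, 100000

# a deliberately biased estimator: the sample mean shrunk toward zero,
# replicated 'reps' times to approximate its sampling distribution
theta = 0.9 * rng.normal(mu, 1.0, size=(reps, n)).mean(axis=1)

mse = np.mean((theta - mu) ** 2)       # empirical mean squared error
var = theta.var()                      # population-style variance (ddof=0)
bias_sq = (theta.mean() - mu) ** 2     # squared empirical bias (true bias is 0.2)
```

Over the empirical distribution of the replications, MSE = Var + Bias^2 holds exactly (up to floating-point arithmetic), since it is an identity, not an approximation.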
39,018
Choosing the number of bootstrap resamples
A bootstrap sample is usually taken to mean that the sample size of the resample is equal to the original sample size. What you are doing is taking resamples from the original sample with larger and larger (re)sample sizes. There is no reason to believe that this will represent the properties of the (original) sampling from the study population.

Say you are interested in the mean of some unknown distribution $F$ (on the real line, to make the example specific). The mean (assuming it exists) $\mu$ of the distribution $F$ is given by $$ \mu(F) = \int_{-\infty}^\infty x \; dF(x) $$ where the integral is a Stieltjes integral. If $F$ is the distribution of some continuous random variable with density $f(x) = F'(x)$, this is the usual integral $\int x f(x) \; dx$, but it also includes the discrete case. The point of writing the expectation in this unusual way is that we can see the expectation is a functional of the distribution $F$, and also that it unifies the treatment of the continuous and discrete cases.

Now we get a sample $x_1, x_2, \dotsc, x_N$ from $F$, and the idea behind bootstrapping is that we represent the distribution $F$ with the sample, and investigate sampling properties of estimators of $\mu$ by resampling from the sample. This makes clear that we need to assume the sample is reasonably representative of $F$, so we cannot expect this to work well with too-small samples.

Now, our sample size was $N$, so we want properties of estimators of $\mu$ based on a sample of size $N$. Suppose we take resamples of size $n$ (possibly with $n \not = N$). Our resamples are a stand-in for samples from $F$ (that is the whole point of bootstrapping!). Suppose $F$ also has an existing variance $\sigma^2$, and we estimate $\mu$ by the empirical mean $$ \bar{x}=\frac{1}{N}\sum_i x_i=\int_{-\infty}^\infty x \;d\hat{F}_N(x) $$ where $\hat{F}_N(x)$ is the empirical distribution function at $x$. Then the variance of this estimator will be $\sigma^2/N$.

Let's say we do resampling, but with resamples of size $n$. Then the empirical mean based on these resamples will have variance $\sigma^2(\hat{F}_N)/n$, where $\sigma^2(\hat{F}_N)$ is the variance based on the sample. If this empirical variance is a good estimator of $\sigma^2$, this will be approximately $\sigma^2/n$. If $n$ is different from $N$, this cannot be a good representation of the variance of $\bar{x}$, so it will not tell you about the real uncertainty in $\bar{x}$ as an estimator of $\mu$.

EDIT To clarify, the error in the results when using bootstrapping can be decomposed into the sampling error (due to only taking $N$ observations) and the bootstrap error (due to only taking a finite number of resamples). By increasing the number of resamples we can reduce the latter, but not the former. Sometimes one deliberately uses a bootstrap sample size different from the original. See Can we use bootstrap samples that are smaller than original sample?, Subsample bootstrapping
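A small simulation sketch of the point about resample size (illustrative sizes of my choosing): the variance of the resample mean tracks $\sigma^2(\hat{F}_N)/n$, so inflating the resample size past $N$ understates the real uncertainty of $\bar{x}$.

```python
import random
import statistics

random.seed(2)

N = 50
sample = [random.gauss(0, 1) for _ in range(N)]  # the original sample

def boot_var_of_mean(data, n, B=5_000):
    """Variance of the mean over B bootstrap resamples of size n."""
    means = [statistics.fmean(random.choices(data, k=n)) for _ in range(B)]
    return statistics.pvariance(means)

s2 = statistics.pvariance(sample)        # sigma^2(F_hat_N), the plug-in variance
v_same = boot_var_of_mean(sample, N)     # ~ s2 / N: mimics Var(x-bar)
v_big = boot_var_of_mean(sample, 5 * N)  # ~ s2 / (5N): spuriously precise
print(f"s2/N = {s2 / N:.4f}  size N: {v_same:.4f}  size 5N: {v_big:.4f}")
```

The size-$N$ resamples reproduce the sampling variance of $\bar{x}$; the size-$5N$ resamples give a variance about five times too small.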
Can Median Absolute Deviation (MAD)/SD be used to determine if a distribution is normal or not?
Considered as a formal test of normality: If $M$ = (sample) median absolute deviation from the median and $s$ = standard deviation, then you could indeed use a measure like $R = M/s$ (or its reciprocal) as a test statistic for a test of normality. Note, however, that such tests cannot tell you something is normal, only - sometimes - that it isn't. To make it a test, all you'd need is the distribution of/a table of percentiles of the distribution of the ratio under the null (i.e. at normality) for various sample sizes. This can be obtained by simulation, for example -- though it might also be possible to obtain it analytically. It's actually a close kin to an old test statistic proposed by Geary[1], which was the ratio of mean deviation to standard deviation, sometimes referred to as Geary's $a$ test (because he proposed a number of test statistics it's necessary to distinguish them, and he used $a$ - and later, $a_1$ - to denote this ratio of mean deviation to standard deviation). Geary's $a$ test has quite good power compared to the Shapiro-Wilk test in small to moderately sized samples for a wide range of symmetric alternatives, beating it in a number of situations. To my recollection it has quite good power against heavier-tailed cases like the logistic and Laplace. Your proposal should have somewhat similar properties. Indeed I think that the likelihood ratio test for normality against a Laplace alternative would correspond to looking at the ratio of mean deviation from the median to standard deviation (which would be a third statistic, a bit more like Geary's than yours).
[My guess is that Geary's $a$ test statistic would have better power against something like a logistic alternative than yours, but yours might be more competitive with even peakier-and-heavier-tailed alternatives than the Laplace -- an example of an alternative that I'd expect it to do especially well against would be the location-scale family based off the distribution of the product of two independent standard normals. It might also do fairly well against something like a t-distribution with low d.f. It would be interesting to see if such guesses hold up, and whether it does well in other situations.] Against general alternatives, the power may sometimes be poor, however - for example we should anticipate relatively low power against lightish-tailed, skew alternatives (at least ones that have similar population ratios of median absolute deviation to standard deviation), compared to widely used omnibus tests. However, many skew alternatives of interest are also heavy-tailed, so it may still do fairly well against some of those. It wouldn't be suitable in every situation but might work very well if you anticipate the kind of alternatives against which it should have reasonably good power. There are a number of papers that have investigated Geary's test but off the top of my head I don't recall any for your proposed statistic. I'd bet that it has been looked at but I didn't find any papers on it with a quick search. The closest I came was Gel et al [2] which discusses a test based on the ratio of standard deviation to mean deviation from the median (which they call MAAD), which would be a version of the test I suggested for a Laplace alternative above. They say that compared to the test based on the MAAD, the MAD has lower power against heavy tailed alternatives (which they say is due to the higher variance of MAD at the normal) but they don't give further details (however, they do say that MAD is better for diagnostic displays, which relates to my point 2. below). 
Aside from that brief passing mention I haven't found anything else on power comparisons. One big advantage of these kinds of tests is their simplicity; they don't require specialized routines to compute the statistic and are amenable to hand computation in small samples, even for beginning students. In the case of Geary's test there's a normal approximation (D'Agostino, 1970 [3]) for $n>40$; there's likely to be a suitable normal approximation in medium-to-large samples here as well. That they can also have good power in situations we might actually care to identify may make them worth considering -- certainly it could be worth a bit of time investigating the power properties more closely and some investigation to find any previous investigations of the test. As a diagnostic tool. Rather than a formal test (which may answer a question we already know the answer to instead of one we'd be better to answer), we could use the ratio as a diagnostic -- a measure of how far from normality we might be (in effect as a kind of raw "effect size" of a particular kind of non-normality). For example, if we're particularly concerned about how heavy-tailed our distribution might be this sort of ratio might be worth considering as a diagnostic measure for that situation, rather than computing something like kurtosis, say. * (i.e. has relatively high power in that situation) [1] Geary, R. C. 1935. "The ratio of mean deviation to the standard deviation as a test of normality." Biometrika 27: 310-332 [2] Gel, Y. R., Miao, W., and Gastwirth, J. L. (2007) Robust Directed Tests of Normality Against Heavy Tailed Alternatives. Computational Statistics and Data Analysis 51, 2734-2746. [3] D'Agostino, Ralph B. (1970), "Simple compact portable test of normality: Geary's test revisited" Psychological Bulletin, Vol 74(2), Aug, 138-140
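As a sketch of how the null table could be obtained by simulation (illustrative sample size and replication count of my choosing, not from any of the cited papers):

```python
import random
import statistics

random.seed(3)

def mad(xs):
    """Median absolute deviation from the median."""
    med = statistics.median(xs)
    return statistics.median(abs(x - med) for x in xs)

def ratio(xs):
    return mad(xs) / statistics.stdev(xs)

# simulate the null distribution of MAD/s under normality at a fixed n
n, reps = 50, 4_000
null = sorted(ratio([random.gauss(0, 1) for _ in range(n)]) for _ in range(reps))
lo, hi = null[int(0.025 * reps)], null[int(0.975 * reps)]
print(f"95% null band for MAD/s at n = {n}: ({lo:.3f}, {hi:.3f})")

# Laplace data: population MAD/sigma = ln(2)/sqrt(2) ~ 0.49, below the normal
# value ~ 0.674, so such samples tend to fall under the lower band limit
laplace = [random.choice((-1, 1)) * random.expovariate(1.0) for _ in range(n)]
print(f"Laplace sample ratio: {ratio(laplace):.3f}")
```

The same simulated percentiles, tabulated over a grid of sample sizes, would give the table of critical values the test needs.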
What's the name of this correlation/association measure between binary variables?
Using the a, b, c, d convention of the 4-fold table, as here,

            Y
          1   0
         ---------
      1  | a | b |
    X    ---------
      0  | c | d |
         ---------

    a = number of cases on which both X and Y are 1
    b = number of cases where X is 1 and Y is 0
    c = number of cases where X is 0 and Y is 1
    d = number of cases where X and Y are 0
    a+b+c+d = n, the number of cases

substitute and get $1-\frac{2(b+c)}{n} = \frac{n-2b-2c}{n} = \frac{(a+d)-(b+c)}{a+b+c+d}$ = Hamann similarity coefficient. Meet it e.g. here. To cite:

Hamann similarity measure. This measure gives the probability that a characteristic has the same state in both items (present in both or absent from both) minus the probability that a characteristic has different states in the two items (present in one and absent from the other). HAMANN has a range of −1 to +1 and is monotonically related to Simple Matching similarity (SM), Sokal & Sneath similarity 1 (SS1), and Rogers & Tanimoto similarity (RT).

You might want to compare the Hamann formula with that of phi correlation (that you mention) given in a, b, c, d terms. Both are "correlation" measures, ranging from -1 to 1. But look: Phi's numerator $ad-bc$ will approach 1 only when both a and d are large (or likewise -1, if both b and c are large) - a product, you know... In other words, Pearson correlation, and especially its dichotomous-data hypostasis, Phi, is sensitive to the symmetry of the marginal distributions in the data. Hamann's numerator $(a+d)-(b+c)$, having sums in place of products, is not sensitive to it: either of the two summands in a pair being large is enough for the coefficient to come close to 1 (or -1). Thus, if you want a "correlation" (or quasi-correlation) measure defying the shape of the marginal distributions, choose Hamann over Phi.

Illustration. Crosstabulations:

        Y
    X   7   1
        1   7     Phi = .75; Hamann = .75

        Y
    X   4   1
        1  10     Phi = .71; Hamann = .75
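A minimal sketch computing both coefficients and reproducing the two illustrations:

```python
import math

def hamann(a, b, c, d):
    """Hamann similarity: ((a + d) - (b + c)) / n."""
    return ((a + d) - (b + c)) / (a + b + c + d)

def phi(a, b, c, d):
    """Phi (Pearson) correlation for a 2x2 table."""
    return (a * d - b * c) / math.sqrt((a + b) * (c + d) * (a + c) * (b + d))

# the two illustrations from the text
print(round(phi(7, 1, 1, 7), 2), round(hamann(7, 1, 1, 7), 2))    # 0.75 0.75
print(round(phi(4, 1, 1, 10), 2), round(hamann(4, 1, 1, 10), 2))  # 0.71 0.75
```

Hamann is unchanged between the two tables while Phi drops as the margins become asymmetric, which is exactly the contrast described above.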
What's the name of this correlation/association measure between binary variables?
Hubalek, Z. Coefficients of association and similarity, based on binary (presence-absence) data: an evaluation (Biol. Rev., 1982) reviews and ranks 42 different correlation coefficients for binary data. Only 3 of them meet basic statistical desiderata. Unfortunately, the issue of PRE (proportionate reduction of error) interpretation is not discussed. For the following contingency table:

                present  absent
    present        a        b
    absent         c        d

the association measure $r$ should meet the following obligatory conditions:

1. $r(J,K) \le r(J,J) \quad\forall J, K$
2. $\min(r)$ should be at $a = d = 0$ and $\max(r)$ at $b = c = 0$
3. $r(J,K) = r(K,J) \quad \forall K,J$
4. discrimination between positive and negative association
5. $r$ should be linear with $\sqrt{\chi^2}$ for both subsets $ad-bc < 0$ and $ad-bc \ge 0$ (note that $\chi^2$ violates condition 4)

and ideally the following non-obligatory ones:

6. range of $r$ should be either $\left\{ -1 \dots +1 \right\}$, $\left\{0 \dots +1 \right\}$, or $\left\{0 \dots \infty \right\}$
7. $r(b=c=0) > r(b = 0 \veebar c = 0)$
8. $r(a=0) = \min(r)$ (stricter than condition 2 above)
9. $r(a+1)-r(a) = r(a+2)-r(a+1)$
10. $r(a=0,b,c,d), r(a=1,b-1,c-1,d+1), r(a=2,b-2,c-2,d+2)\ldots$ should be smooth
11. homogeneous distribution of $r$ in a permutation sample
12. random samples from a population with known $a,b,c,d$: $r$ should show little variability even in small samples
13. simplicity of calculation, low computer time

All conditions are met by Jaccard $\left( \frac{a}{a+b+c} \right)$, Russell & Rao $\left( \frac{a}{a+b+c+d} \right)$ (both range $\left\{0 \dots +1 \right\}$) and McConnaughey $\left( \frac{a^2 - bc}{(a+b) \times (a+c)}\right)$ (range $\left\{ -1 \dots +1 \right\}$).
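The three surviving coefficients are trivial to compute from a 2×2 table; a sketch:

```python
def jaccard(a, b, c, d):
    return a / (a + b + c)

def russell_rao(a, b, c, d):
    return a / (a + b + c + d)

def mcconnaughey(a, b, c, d):
    return (a * a - b * c) / ((a + b) * (a + c))

# b = c = 0 (perfect positive association): Jaccard and McConnaughey reach 1,
# while Russell & Rao keeps the joint absences d in its denominator
print(jaccard(5, 0, 0, 5), russell_rao(5, 0, 0, 5), mcconnaughey(5, 0, 0, 5))
```

Note that only McConnaughey can go negative (when $bc > a^2$), matching its stated range of $\{-1 \dots +1\}$.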
Why use differencing and Box-Cox in time series?
The Box-Cox transformation is a family of power transformations indexed by a parameter lambda. Whenever you use it, the parameter needs to be estimated from the data. In time series the process could have a non-constant variance, and if the variance changes with time, the process is nonstationary. It is often desirable to transform a time series to make it stationary, and sometimes, after applying Box-Cox with a particular value of lambda, the process may look stationary. Even if the series still does not appear to be stationary after the Box-Cox transformation, diagnostics from ARIMA modeling can be used to decide whether differencing or seasonal differencing might be useful to remove polynomial trends or seasonal trends, respectively. After that, the result might be an ARMA model that is stationary. If diagnostics confirm the orders p and q for the ARMA model, the AR and MA parameters can then be estimated. Regarding other possible uses of Box-Cox: in the case of a series of iid random variables that do not appear to be normally distributed, there may be a particular value of lambda that makes the data look approximately normal. Presumably, this could be applied in regression or time series to the error term.
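A rough sketch of the two steps on a made-up multiplicative-growth series: lambda is estimated by maximizing the Box-Cox profile log-likelihood on a coarse grid (pure Python for illustration; a real analysis would use a dedicated Box-Cox routine), then the transformed series is differenced.

```python
import math
import statistics

def boxcox(xs, lam):
    """Box-Cox power transform: log for lam = 0, (x^lam - 1)/lam otherwise."""
    return [math.log(x) if lam == 0 else (x ** lam - 1) / lam for x in xs]

def boxcox_loglik(xs, lam):
    """Profile log-likelihood of lambda under iid normal errors."""
    n = len(xs)
    return (-n / 2 * math.log(statistics.pvariance(boxcox(xs, lam)))
            + (lam - 1) * sum(map(math.log, xs)))

# made-up series with multiplicative growth: the spread grows with the level
series = [math.exp(0.05 * t) * (1 + 0.1 * math.sin(t)) for t in range(1, 101)]

# estimate lambda on a coarse grid, transform, then take first differences
lam_hat = max((l / 10 for l in range(-20, 21)),
              key=lambda lam: boxcox_loglik(series, lam))
transformed = boxcox(series, lam_hat)
differenced = [b - a for a, b in zip(transformed, transformed[1:])]
print(f"lambda-hat = {lam_hat}, number of first differences = {len(differenced)}")
```

For exponential-type growth the estimated lambda tends toward the log transform, and differencing the transformed series then removes the remaining trend.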
Are Random Forests more powerful than generalized linear models?
You should try lots of models. The 'no free lunch' theorem states that there is no one best model - every situation is different. Logistic regression for example is desirable when it works because parameters are very interpretable. Random forests are great because they can deal with very difficult patterns, but forget about interpreting them. The point is - never stick to just one approach.
Are Random Forests more powerful than generalized linear models?
One point to consider is whether you are interested in making predictions or in understanding associations and carrying out inference (confidence intervals around effects). Although random forests provide a variable-importance summary, the technique is primarily aimed at prediction; there is no inference. Many researchers think they are interested in making predictions, but often there is a mismatch with their goals. With that said, you can make predictions with glm and gamlss, and you also have the flexibility of regression. A benefit of random forests is that you do not have to specify aspects such as interactions, and because of this they may discover patterns in your data. Further, they handle the case where there are more predictors than observations. Still, there is evidence that techniques such as random forests do not use the data as efficiently as conventional techniques do. More research is needed. I think both techniques are updatable in the same manner. Reference: van der Ploeg, T., Austin, P. C., & Steyerberg, E. W. (2014). Modern modelling techniques are data hungry: a simulation study for predicting dichotomous endpoints. BMC Medical Research Methodology, 14(1), 137.
39,025
Are Random Forests more powerful than generalized linear models?
Let's look at some simple examples to show the differences. Each example has a single independent variable x and a single dependent variable - either real-valued y or categorical z:

 x     y     z
 ...
 0     0.01  A
 1     1.98  A
 2.01  4.02  B
 2.99  6.01  B
 ...

One can see that y grows as x grows and that z=B for values of x greater than something around 1.5. That is an example where GLM and similar methods shine. It is easy to make good predictions for both y and z for a new value of x. For example, for x=0.5 you would predict y=1 and z=A ("You can understand the direct influence, and direction, of each variable," as @HEITZ wrote).

 x     y     z
 ...
 0     0.01  A
 1     1.98  B
 2.01  0     A
 2.99  2.01  B
 4     0     A
 5.01  2.00  B
 ...

We can see a clear pattern in the data again, but GLM and similar methods cannot: the connection between x and y or z is neither linear nor even additive. That is when other methods such as random forests need to be used. A prediction based on GLM for x=3 would be y=1 and, more or less at random, z=A or z=B. However, RF or similar methods can predict what we would expect: y=2 and z=B. Generally I would: try GLM and similar methods first - if they model the data well, use them, because you can understand the model well; if not, try other methods such as RF (or neural networks, etc.).
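For concreteness, here is a small pure-Python sketch of the second example (values copied from the table in the answer). A simple 1-nearest-neighbour rule stands in for a flexible method like a random forest, since the point being illustrated is only global-line vs. local-pattern fitting:

```python
# Straight-line fit vs. a flexible local rule on the zig-zag data.
xs = [0.0, 1.0, 2.01, 2.99, 4.0, 5.01]
ys = [0.01, 1.98, 0.0, 2.01, 0.0, 2.00]

def linear_predict(x_new):
    """Ordinary least-squares line fitted to (xs, ys)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    return my + slope * (x_new - mx)

def nn_predict(x_new):
    """Predict with the single closest training point."""
    i = min(range(len(xs)), key=lambda j: abs(xs[j] - x_new))
    return ys[i]

# At x = 3 the local pattern suggests y near 2; the line averages it away.
print(round(linear_predict(3.0), 2))  # close to 1, the overall mean of y
print(nn_predict(3.0))                # 2.01, tracking the local pattern
```

The linear fit predicts roughly the overall mean at x=3, while the local rule recovers the alternating pattern, which is the gap the answer describes.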
39,026
Usages of Mean(X/Y) vs. Mean(X) / Mean(Y)
You should present Mean(X/Y) if X/Y is a useful measure and a mean is a useful way to summarize it. By Jensen's inequality we know that the ratio of the means is generally not equal to the mean of the ratios, except under some special circumstances.
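A tiny numeric illustration (the values are made up) that the two summaries generally differ:

```python
# Mean(X/Y) versus Mean(X)/Mean(Y) on made-up data.
X = [2.0, 4.0, 9.0]
Y = [1.0, 2.0, 3.0]

mean_of_ratio = sum(x / y for x, y in zip(X, Y)) / len(X)  # (2 + 2 + 3) / 3
ratio_of_means = (sum(X) / len(X)) / (sum(Y) / len(Y))     # 5 / 2

print(mean_of_ratio, ratio_of_means)  # 2.33... vs 2.5 -- not equal
```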
39,027
Usages of Mean(X/Y) vs. Mean(X) / Mean(Y)
$Z=Y/X$ may be meaningful for individual users as their individual average volume per upload, but $\text{Mean}(Y/X)$ does not look meaningful in aggregate as some users use the system more than others. If you took a weighted mean of $Z=Y/X$ to account for this, the natural weights would be the numbers of uploads $X$ and the resulting weighted mean would turn out to be $$\text{Weighted Mean}(Z)=\text{Sum}(X \times Y/X)/ \text{Sum}(X)=\text{Sum}(Y)/ \text{Sum}(X) \\ =\text{Mean}(Y)/ \text{Mean}(X)$$ which would also be the aggregate average volume per upload across the system. Your concerns are justified: It would probably be better to use the latter option.
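The identity is easy to check numerically; the per-user values below are made up:

```python
# Weighted Mean(Z) with weights X equals Sum(Y)/Sum(X) = Mean(Y)/Mean(X).
X = [1.0, 3.0, 10.0]                 # uploads per user
Y = [5.0, 6.0, 40.0]                 # total volume per user
Z = [y / x for x, y in zip(X, Y)]    # per-user volume per upload

weighted_mean_z = sum(w * z for w, z in zip(X, Z)) / sum(X)
aggregate_ratio = sum(Y) / sum(X)

print(weighted_mean_z, aggregate_ratio)  # identical values
```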
39,028
Does "Markov Chain Monte Carlo" method really need "Markov Chain"? [duplicate]
First, there are more Markov chain Monte Carlo (MCMC) samplers aside from Metropolis-Hastings (MH), but I will focus on MH. It is not that a Markov chain is needed; it is that when you use an MH algorithm, the samples obtained themselves form a Markov chain that converges to the stationary distribution indicated in the accept-reject criterion. There are a couple of differences between rejection sampling and MH. Let's say the distribution from which you desire samples is $\pi$. In MH, if your current sample is $x$, you draw a sample $y$ from a proposal distribution $q$ and find the following accept-reject ratio $$\min \left( 1,\dfrac{\pi(y) \,q(x \mid y)}{\pi(x) \, q(y \mid x) } \right). $$ Notice that the accept-reject ratio itself is a function of the current step $x$. Even if your proposal does not depend on your current step (as in independent MH), your ratio will still be $$\min \left( 1,\dfrac{\pi(y) \,q(x)}{\pi(x) \, q(y) } \right), $$ which again depends on the current step. Thus the probability with which you accept or reject depends on the current step, making the samples that you obtain a Markov chain. If the proposed step is rejected in MH, then the next step is the same as the current step. So again, the next step depends on the previous step, and thus we have a Markov chain. The stationary distribution that the MH algorithm will converge to is the one that takes $\pi$'s position in the ratio. If you replace it with any other distribution, the samples you get will converge to that distribution. Whether the Markov chain converges or not is something practitioners don't have to worry about, because this has already been proven for the MH algorithm. Finally, there is no need to calculate the transition matrix, because there is a clear way of updating the Markov chain without one. Also, when the state space is infinite (like the real line), we can no longer deal with a transition matrix and have to study the transition "kernel". But this is not required either, since the way the Markov chain updates itself is fully specified by the MH ratio; no other information is needed.
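A minimal random-walk Metropolis sketch in pure Python (a standard normal target chosen for illustration; with a symmetric proposal the $q$ terms cancel in the ratio):

```python
import math
import random

def metropolis_hastings(log_target, n_steps, x0=0.0, step=1.0, seed=0):
    """Random-walk Metropolis: symmetric proposal, so q cancels in the ratio."""
    rng = random.Random(seed)
    x, chain = x0, []
    for _ in range(n_steps):
        y = x + rng.gauss(0.0, step)               # propose from q(y|x)
        log_ratio = log_target(y) - log_target(x)  # log of pi(y)/pi(x)
        if math.log(rng.random()) < log_ratio:     # accept w.p. min(1, ratio)
            x = y                                  # accepted: move
        chain.append(x)                            # on rejection, x repeats
    return chain

# Target: standard normal, log-density up to an additive constant.
chain = metropolis_hastings(lambda t: -0.5 * t * t, 50_000)
mean = sum(chain) / len(chain)
print(round(mean, 2))  # near 0
```

Note how a rejection appends the current value again, so each draw depends on the previous one: that dependence is exactly what makes the output a Markov chain.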
39,029
Does "Markov Chain Monte Carlo" method really need "Markov Chain"? [duplicate]
This is just confusion with the terminology. It's not that you need to "use" a Markov chain to draw Metropolis-Hastings samples, it's that the sequence of Metropolis-Hastings draws is a Markov chain. This is an innate part of the algorithm design. "Markov chain Monte Carlo" is just a qualifier indicating that the draws from this Monte Carlo algorithm produce a Markov chain.
39,030
Does "Markov Chain Monte Carlo" method really need "Markov Chain"? [duplicate]
MCMC in general, and Metropolis-Hastings in particular, is quite distinct from rejection sampling. Note that rejection sampling is independent from one generated value to the next -- it doesn't matter what value you just generated; the distribution of the next one doesn't consider it. MCMC involves a series of moves: at one step you are at some value, and then, conditional on that value, you have some way of possibly moving to a (possibly) new value at the next step. In addition, you left out of your description what happens when you reject. In rejection sampling you simply fail to have a value, and you generate again. In Metropolis-Hastings, you fail to accept the move, so you retain the previous value.
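For contrast, a minimal rejection sampler (the target density $f(x)=6x(1-x)$ and the envelope constant are chosen only for illustration): a rejected draw is simply discarded and has no influence on the next one, whereas in Metropolis-Hastings a rejection repeats the current value.

```python
import random

def rejection_sample(n, seed=0):
    """Rejection sampling from f(x) = 6x(1-x) on [0, 1] with a Uniform(0, 1)
    proposal and envelope constant M = 1.5 (the maximum of f).  A rejected
    draw is thrown away and a fresh, independent one is generated."""
    rng = random.Random(seed)
    out = []
    while len(out) < n:
        x = rng.random()                            # draw from the proposal
        if rng.random() < (6 * x * (1 - x)) / 1.5:  # accept w.p. f(x)/(M g(x))
            out.append(x)
    return out

samples = rejection_sample(20_000)
print(round(sum(samples) / len(samples), 2))  # near 0.5, the mean of f
```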
39,031
Are linear classifiers (SVM, Logistic Regression) deterministic?
First, your definitions of "deterministic" and "linear classifier" are not clear to me. For example, are you asking whether model building is deterministic, or whether model prediction is deterministic? In addition, most people would not consider an SVM a linear model, but you treat it as linear. I will try to guess what you want to ask. Most models (not necessarily "linear" ones) are "deterministic" at the prediction stage, and they should be: intuitively, feeding in the same input should produce the same output. However, many models do have some randomness when we build the model. This means that, given the same data but different random seeds, you can get different models (see random forest as an example). After model building, the prediction stage is "deterministic", i.e., the same input will give the same output. Finally, the "linear models" you mentioned, logistic regression and SVM, do not use a random seed during the training process. As mentioned in the other answers and comments, the reason is that the objective functions for logistic regression and SVM are convex, so we have a unique answer / global minimum when we build the model.
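A small sketch of the training-stage point: a logistic regression fitted by plain gradient descent (on made-up toy data) involves no randomness anywhere, so two runs return bit-identical coefficients, with no seed required:

```python
import math

# Toy data (made up): label is 1 when x is positive.
data = [(-2.0, 0), (-1.0, 0), (-0.5, 0), (0.5, 1), (1.0, 1), (2.0, 1)]

def fit_logistic(points, steps=500, lr=0.5):
    """Gradient descent on the logistic loss.  The loss is convex and the
    procedure uses no randomness, so repeated runs give identical results."""
    w = b = 0.0
    n = len(points)
    for _ in range(steps):
        gw = gb = 0.0
        for x, y in points:
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # predicted probability
            gw += (p - y) * x
            gb += p - y
        w -= lr * gw / n
        b -= lr * gb / n
    return w, b

run1 = fit_logistic(data)
run2 = fit_logistic(data)
print(run1 == run2)  # True: bit-for-bit identical, no seed needed
```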
39,032
Are linear classifiers (SVM, Logistic Regression) deterministic?
I would think that any algorithm that can be proven to reach a global error minimum (linear/logistic regression, support vector machines) should give the same model each time, except perhaps for a few trailing decimal places. Models that do not make this guarantee and could get stuck in a local minimum, like neural networks, or that involve randomness, like random forests, will probably differ from training session to training session.
39,033
Square of the Sample Mean as estimator of the variance
You have $X_1, X_2, \dots, X_n$ iid from an unknown distribution with mean (say) $\mu$ and variance (say) $\sigma^2$. $\bar{X}$ is an unbiased estimator of the mean, and thus $E(\bar{X}) = \mu$. Also, $Var(\bar{X}) = \sigma^2/n$. Thus, \begin{align*} E[\bar{X}^2] & = Var(\bar{X}) + E[\bar{X}]^2\\ & = \dfrac{\sigma^2}{n} + \mu^2. \end{align*} You can now figure out what the bias is. Clearly, $\bar{X}^2$ is a horrible estimator for $\sigma^2$. As wolfies pointed out, you will do better with $n\bar{X}^2$.
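A quick Monte Carlo check of $E[\bar{X}^2] = \sigma^2/n + \mu^2$ in pure Python (the parameter values are made up):

```python
import random

# Monte Carlo check of E[Xbar^2] = sigma^2/n + mu^2.
# Made-up values: X_i ~ Normal(mu = 2, sigma = 3), sample size n = 5.
rng = random.Random(0)
mu, sigma, n, reps = 2.0, 3.0, 5, 200_000

total = 0.0
for _ in range(reps):
    xbar = sum(rng.gauss(mu, sigma) for _ in range(n)) / n
    total += xbar * xbar

estimate = total / reps
print(round(estimate, 2))  # near sigma^2/n + mu^2 = 9/5 + 4 = 5.8
```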
39,034
Square of the Sample Mean as estimator of the variance
Here is a solution using the 'Moment of Moment' functions in the mathStatica package for Mathematica. In particular, let $s_1$ denote the sample sum, i.e. $s_1 = \sum_{i=1}^nX_i$. Then you seek $$E\big[{\big(\tfrac{s_1}{n}\big)}^2\big],$$ which is the $1^{\text{st}}$ raw moment of $(\frac{s_1}{n})^2$, expressed here in terms of central moments: $$E\big[{\big(\tfrac{s_1}{n}\big)}^2\big] = \frac{\mu_2}{n} + \mu^2,$$ where $\mu_2$ denotes the $2^{\text{nd}}$ central moment of the population (i.e. the population variance) and $\mu$ the population mean. Plainly, this is a biased estimator of the population variance. Perhaps what you intended was $E\big[n {\big(\frac{s_1}{n}\big)}^2\big]$: $$E\big[n{\big(\tfrac{s_1}{n}\big)}^2\big] = \mu_2 + n\mu^2,$$ which will be an unbiased estimator of the population variance if the population mean is zero.
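A numeric check of the zero-mean claim, sketched in pure Python rather than Mathematica (parameter values are made up): $E[n\bar{X}^2]$ matches the variance only when $\mu = 0$.

```python
import random

def avg_n_xbar_sq(mu, sigma=2.0, n=4, reps=200_000, seed=1):
    """Average of n * Xbar^2 over many simulated samples of size n."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(reps):
        xbar = sum(rng.gauss(mu, sigma) for _ in range(n)) / n
        total += n * xbar * xbar
    return total / reps

zero_mean = avg_n_xbar_sq(mu=0.0)  # expect sigma^2 = 4: unbiased
shifted = avg_n_xbar_sq(mu=3.0)    # expect sigma^2 + n*mu^2 = 40: badly biased
print(round(zero_mean, 1), round(shifted, 1))
```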
39,035
Square of the Sample Mean as estimator of the variance
Do you actually mean something like "$\frac{1}{n-1} \sum_i \left(x_i - \bar{x} \right)^2$, where $\bar{x}$ is the sample mean, is an unbiased estimator of the population variance?" Or perhaps, "Is $\frac{1}{n} \sum_i x_i^2 - \bar{x}^2$ an unbiased estimator of the population variance?" Trivial counterexample for what you literally asked: It's trivial to show that the square of the sample mean is neither a consistent nor unbiased estimator in the general case. Assume $X_i = 2$ for all i: The sample mean is 2, no matter what. The population variance is 0. The sample mean squared is 4. $ 4 \neq 0$ I'd bet though this isn't what the homework is asking for. (Assuming this is homework.)
39,036
Why are over identified models preferred over just identified models in Structural Equation Modeling?
The point of running a structural equation model is to be able to be wrong - and that's only true if it's over-identified (i.e. has degrees of freedom greater than zero). You can specify a multiple regression model as a structural equation model; you'll get the same answer, and the model will be just identified, so it will have zero degrees of freedom. But then what was the point of the structural equation model? If a model is not identified, then it has more than one solution - usually an infinite number of solutions, all of which are equally good. Here's an equation: $x = 5$. There is one unknown and one equation, so it is just identified. That's fine if you want to know the value of $x$, but there is no way that the model can be wrong. Here's another: $x + y = 5$. There are two unknowns and one equation. It's not identified: there are an infinite number of solutions, and they are all equally good. Here's a set of equations with two unknowns and two equations; it's just identified: $x + y = 5$, $x - y = 1$. Finally: $x + y = 5$, $x - y = 1$, $2x + 2y = 6$. Now there are three equations and two unknowns. It's now possible for this set of equations to be wrong, and it is wrong. But change that value of 6 (to 10, say), and the model could be correct and still be over-identified. Identification is a tricky issue, because the model must also be empirically identified, and that depends on the data. Here we have two equations and two unknowns, but the model is not identified: $x \times y = 0$, $x + y = 4$. There are two possible solutions: either $x$ or $y$ can be equal to zero. So we want over-identified models because they provide a single solution that can possibly be wrong. Just identified models also provide a single solution, but it cannot be wrong. To expand beyond the question: why is being able to be wrong a good thing? When I teach this, I say that you have a right to be sued. That's weird.
The ability to be sued means that people can trust you with things, because they can sue you if you screw up. You can rent a car, get a checkbook, sign a contract, get a credit card. Children can't do those things, because they can't be sued. In the same way, being able to be wrong has advantages - if you're right, it's better evidence for your theory.
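The three-equation example can be checked directly: solving it by least squares (sketched below in plain Python) leaves a nonzero residual, which is exactly the sense in which an over-identified model can be "wrong".

```python
# The over-identified system x+y=5, x-y=1, 2x+2y=6, solved by least squares.
# A just-identified system always fits exactly; the third equation lets the
# "model" be wrong, which shows up as a nonzero residual.

def solve_2x2(a, b, c, d, e, f):
    """Solve [[a, b], [c, d]] @ [x, y] = [e, f] by Cramer's rule."""
    det = a * d - b * c
    return (e * d - b * f) / det, (a * f - e * c) / det

A = [(1, 1), (1, -1), (2, 2)]   # coefficients of x and y in each equation
rhs = [5, 1, 6]

# Normal equations (A^T A) [x, y] = A^T rhs give the least-squares solution.
ata = [sum(r[i] * r[j] for r in A) for i in (0, 1) for j in (0, 1)]
atb = [sum(r[i] * v for r, v in zip(A, rhs)) for i in (0, 1)]
x, y = solve_2x2(ata[0], ata[1], ata[2], ata[3], atb[0], atb[1])

residual = sum((a * x + b * y - v) ** 2 for (a, b), v in zip(A, rhs))
print(round(x, 2), round(y, 2), round(residual, 2))  # 2.2 1.2 3.2 -- nonzero
```

Dropping the third equation gives $x=3$, $y=2$ with zero residual: a just-identified system cannot fail to fit.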
39,037
Regression variable has no meaning for one category
This will happen naturally, with no intervention on your part. Consider, for instance, dummy coding. This system uses vectors of zeros and ones to indicate the categorical variables in a way that allows straightforward interpretation of the coefficients. A variable with $k$ categories is represented by $k-1$ terms (along with an "intercept"). A standard way to describe this uses vector notation.

- The "base" contribution to the response is the intercept $\beta_0$. The corresponding vector is $(1,0,\ldots,0)$ with $k$ components.
- The contribution of the second category relative to the first is $\beta_1$, whence the contribution of the second category is $\beta_0 + \beta_1$. The corresponding vector is $(1,1,0,\ldots,0)$.
- $\cdots$
- The contribution of category $k$ relative to the first is $\beta_{k-1}$, whence the contribution of category $k$ is $\beta_0 + \beta_{k-1}$. The corresponding vector is $(1,0,\ldots,0,1)$.

Thus, each vector has an initial $1$ (for the intercept). The vectors for all categories but the base have a single additional $1$. Each observation, as given by its vector $\mathbf{x}$, contributes $$\mathbf{x} \cdot (\beta_0, \beta_1, \ldots, \beta_{k-1})$$ to the response. These dot products give the values $\beta_0, \beta_0+\beta_1, \ldots, \beta_0 + \beta_{k-1}$ mentioned in the bulleted list above. The same system is used when more than one categorical variable is included among the regressors, but they all share the same intercept. In other words, the "base" case is the one where all categorical variables have their base values. The principal advantage of this coding system--besides being automatic in just about any statistical computing platform--is that the coefficients have simple natural interpretations. To evaluate whether the existence of communication is significant, for instance, you would examine the coefficient associated with $x_2$ ($\beta_3$ in this example) and test whether it differs significantly from zero.
This test is usually automatically conducted by software and shown in its summary output. The question provides a good example. The following table (automatically created by R) shows all six possible combinations of a three-category regressor $x_1$, with values "1", "2", and "3+", and a two-category regressor $x_2$ with values "No" and "Yes".

x1  x2   Intercept  x1=2  x1=3+  x2=Yes  Coefficient
1   No   1          0     0      0       b0
2   No   1          1     0      0       b0 + b1
3+  No   1          0     1      0       b0 + b2
1   Yes  1          0     0      1       b0 + b3       -- there won't be any rows like this
2   Yes  1          1     0      1       b0 + b1 + b3
3+  Yes  1          0     1      1       b0 + b2 + b3

The left two columns show the combined values of $x_1$ and $x_2$. The remaining four columns correspond to (a) an intercept common to both variables, (b) $3-1=2$ components for the effects of $x_1$ relative to the base, and (c) $2-1=1$ components for the effects of $x_2$ relative to the base (that is, the difference between having communications and not). We may call their coefficients $\beta_0, \beta_1, \beta_2, \beta_3$, in order from left to right. The dot product, showing the contribution of each row to the response, is summarized in the rightmost column (in which b0 stands for $\beta_0$, etc). When certain combinations are not possible, such as x1=1 and x2=Yes (represented in the fourth row), they simply will not appear in the dataset. Because of this, some might argue that the interpretation of $\beta_3$ should change subtly. Whereas before it would have been understood as the difference between communications and no communications, now it is understood as that difference for the cases where communications make sense. Here is an example of software output (for a logistic regression) using this coding:

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)  0.65625    0.07841   8.369 3.09e-14 ***
x1.2        -0.33594    0.10373  -3.238  0.00147 **
x1.3+       -0.50781    0.10373  -4.895 2.43e-06 ***
x2Yes        0.04687    0.07841   0.598  0.55085

The four lines correspond to the four similarly-labeled columns in the table.
In this case, the software has performed a t-test for x2Yes, which is $\beta_3$, and obtained a p-value of $0.55085$. This would not be considered significant by anyone. The conclusion would be that although there is some evidence that communications increases the chance of a response (as evidenced by the positive estimate $\hat\beta_3 = 0.04687$), it is not significant in this dataset.
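This coding can also be produced mechanically; here is a hypothetical sketch in Python with pandas (R's lm() builds the equivalent columns, plus the intercept, automatically):

```python
import pandas as pd

# Hypothetical data echoing the question: x1 in {"1", "2", "3+"}, x2 in {"No", "Yes"},
# where the impossible combination x1 = "1" with x2 = "Yes" simply never occurs
df = pd.DataFrame({
    "x1": ["1", "2", "3+", "2", "3+"],
    "x2": ["No", "No", "No", "Yes", "Yes"],
})

# drop_first=True drops each variable's base category, leaving k - 1 dummy columns;
# the common intercept column of ones is added by the fitting routine itself
X = pd.get_dummies(df, drop_first=True)
# X has columns x1_2, x1_3+ and x2_Yes, matching b1, b2 and b3 in the table above
```

Rows with the impossible combination never enter the data, so the x2_Yes column is only ever 1 where it makes sense, exactly as the answer describes.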
39,038
Regression variable has no meaning for one category
If I understand this, what you are saying is that when there is only one person, the question of whether communication exists is meaningless. You can solve this issue by making the two IVs into one:

One person
Two people, no communication
Two people, communication
Three+ people, communication
Three+ people, no communication
39,039
What's the Kendall Tau's distance between these 2 rankings?
The Kendall tau distance in your case is, indeed, 1. See the following python code:

import itertools

def kendallTau(A, B):
    pairs = itertools.combinations(range(0, len(A)), 2)
    distance = 0
    for x, y in pairs:
        a = A[x] - A[y]
        b = B[x] - B[y]
        # if discordant (different signs)
        if (a * b < 0):
            distance += 1
    return distance

ranking_i = [3, 1, 2]
ranking_j = [2, 1, 3]
assert kendallTau(ranking_i, ranking_j) == 1
39,040
What's the Kendall Tau's distance between these 2 rankings?
The wrong answer got accepted! The correct answer is 3. From Wikipedia: Kendall tau distance is also called bubble-sort distance since it is equivalent to the number of swaps that the bubble sort algorithm would take to place one list in the same order as the other list.

import itertools

def kendall_tau_distance(order_a, order_b):
    pairs = itertools.combinations(range(1, len(order_a) + 1), 2)
    distance = 0
    for x, y in pairs:
        a = order_a.index(x) - order_a.index(y)
        b = order_b.index(x) - order_b.index(y)
        if a * b < 0:
            distance += 1
    return distance

print(kendall_tau_distance([3, 1, 2], [2, 1, 3]))  # prints 3
39,041
What's the Kendall Tau's distance between these 2 rankings?
The Kendall tau distance in this instance is 3. It is also known as the Kemeny distance. In some fields rankings are also allowed to have ties; in that case the Kemeny distance could be considered 6 instead of 3. That's a confusion that arises quite often. But in your situation it is 3 because ties are not allowed.
39,042
What's the Kendall Tau's distance between these 2 rankings?
Kendall tau distance is the number of swaps that need to be done to make the two lists the same. It can also be considered as a variant of insertion sort, where each swap adds +1 to the Kendall distance.
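That swap-counting view is easy to sketch directly; the helper below (hypothetical, not from any library) counts the adjacent swaps a plain bubble sort needs to turn one ordering into the other:

```python
def kendall_tau_by_swaps(order_a, order_b):
    """Count adjacent swaps needed to rearrange order_a into order_b (bubble sort)."""
    pos = {item: i for i, item in enumerate(order_b)}
    seq = [pos[item] for item in order_a]  # order_a expressed in order_b's positions
    swaps = 0
    for _ in range(len(seq)):
        for j in range(len(seq) - 1):
            if seq[j] > seq[j + 1]:
                seq[j], seq[j + 1] = seq[j + 1], seq[j]
                swaps += 1
    return swaps

# For the rankings discussed in this thread, read as orderings of items:
# kendall_tau_by_swaps([3, 1, 2], [2, 1, 3]) == 3
```

The swap count equals the number of inversions between the two orderings, which is why it agrees with the discordant-pair count in the earlier answer.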
39,043
Interpretation of $\mathbf{y}^T(\mathbf{I}-\mathbf{H})\mathbf{y}$ in OLS
It's the sum of squared residuals. As you say (no boldface in my notation), $(I-H)y=r$ gives the residuals, as we subtract the fitted values $Hy$ from the dependent variable $y$. Also, $I-H$ is symmetric and idempotent, i.e., $(I-H)=(I-H)'$ and $(I-H)^2=I-H$. This follows from $H=X(X'X)^{-1}X'$, which is symmetric and idempotent itself, just like $I$. $H$ is symmetric because of the properties $(AB)'=B'A'$ and $[A^{-1}]'=[A']^{-1}$, i.e., that when taking a transpose of the product, it's the product of the transposes in reverse order, and that the inverse of a transpose is the transpose of the inverse. Idempotency of $H$ can be seen from $$ H^2=X(X'X)^{-1}\underbrace{X'X(X'X)^{-1}}_{=I}X'=X(X'X)^{-1}X'=H $$ Thus, $(I-H)^2=I-2H+H^2=I-2H+H=I-H$. Hence, $$ y'(I-H)y=y'(I-H)'(I-H)y=r'r $$ As $r=(r_1,\ldots,r_n)'$, $$ r'r=(r_1,\ldots,r_n)\begin{pmatrix}r_1\\ \vdots\\r_n\end{pmatrix}=\sum_{i=1}^nr_i^2 $$ So it's the sum of the squared mistakes OLS makes. Graphically, it is the sum of the areas of the dashed squares (which look like rectangles because of the aspect ratio), where the residuals are the vertical distances between the observations and the regression line. It is a useful measure of fit. For example, this quantity is a key ingredient in the $R^2$ formula.
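A quick numerical sanity check of $y'(I-H)y = r'r$; this is a sketch with simulated data, where the design matrix and response are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 20, 2
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])  # design with intercept
y = rng.normal(size=n)

H = X @ np.linalg.inv(X.T @ X) @ X.T  # hat matrix H = X(X'X)^{-1}X'
M = np.eye(n) - H                     # I - H, symmetric and idempotent

quad_form = y @ M @ y                 # y'(I - H)y
residuals = M @ y                     # r = (I - H)y
ssr = np.sum(residuals ** 2)          # r'r, the sum of squared residuals
```

The two numbers agree to floating-point precision, and one can also verify symmetry and idempotency of $I-H$ directly on the matrix M.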
39,044
White's test for heteroskedasticity in R
The whites.htest() function implements White's test for heteroskedasticity for vector autoregressions (VAR). It requires a varest object as input. However, from your description it seems that your model is not a VAR (vector autoregression) but a simple linear model. Hence, the model should be estimated by lm() as previously suggested in the comments. Then you can use the bptest() function from the lmtest package to carry out White's test. The latter requires that you set up the terms in the auxiliary model yourself. It should look like this:

m <- lm(A ~ B + C, data = dataset)
bptest(m, ~ B*C + I(B^2) + I(C^2), data = dataset)

Note that B*C includes the main effects and their interaction (= product). Equivalently - and maybe somewhat more explicitly - we could also specify the auxiliary model as ~ B + C + I(B*C) + I(B^2) + I(C^2). You can also look at help("CigarettesB", package = "AER") for a worked example.
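For intuition, the auxiliary regression that bptest() sets up can be written out by hand. Here is a sketch with simulated data (variable names A, B, C simply echo the question):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
B = rng.normal(size=n)
C = rng.normal(size=n)
A = 1 + 2 * B - C + rng.normal(size=n)  # simulated, homoskedastic errors

# Main model A ~ B + C by OLS
X = np.column_stack([np.ones(n), B, C])
beta = np.linalg.lstsq(X, A, rcond=None)[0]
u2 = (A - X @ beta) ** 2                # squared residuals

# White's auxiliary regression of u^2 on levels, squares and the cross-product,
# i.e. exactly the ~ B*C + I(B^2) + I(C^2) formula passed to bptest()
Z = np.column_stack([np.ones(n), B, C, B * C, B ** 2, C ** 2])
gamma = np.linalg.lstsq(Z, u2, rcond=None)[0]
r2 = 1 - np.sum((u2 - Z @ gamma) ** 2) / np.sum((u2 - u2.mean()) ** 2)

lm_stat = n * r2  # compare against a chi-squared with Z.shape[1] - 1 = 5 df
```

With homoskedastic errors, as simulated here, the LM statistic should be unremarkable; under heteroskedasticity the auxiliary R-squared, and hence n * R-squared, grows.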
39,045
KL divergence between two univariate Poisson distributions
This is fairly straightforward. The log ratio of the densities is equal to: $$ \log\left (\frac {f_1}{f_2}\right)=x\log\left (\frac {\lambda_1}{\lambda_2}\right)+\lambda_2-\lambda_1$$ Then you take the expectation of this expression wrt $f_1$, which simply replaces $x$ with its expectation (in this case). So you have: $$ D_{KL} (f_1||f_2)=\lambda_1\log\left (\frac {\lambda_1}{\lambda_2}\right)+\lambda_2-\lambda_1$$
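The closed form is easy to verify numerically; here is a sketch summing the KL divergence over a truncated Poisson support (truncation at k = 100 is ample for these rates):

```python
import math

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

lam1, lam2 = 3.0, 5.0

# Direct sum of f1(k) * log(f1(k) / f2(k)) over the (truncated) support
kl_num = sum(
    poisson_pmf(k, lam1) * math.log(poisson_pmf(k, lam1) / poisson_pmf(k, lam2))
    for k in range(100)
)

# Closed form from the derivation above
kl_closed = lam1 * math.log(lam1 / lam2) + lam2 - lam1
```

The two agree to floating-point precision, and the divergence is positive, as it must be for distinct rates.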
39,046
Significance in beta regression and glm binomial
The binomial is for modeling Bernoulli variables (i.e., binary) or binomial variables (i.e., the number of successes from a certain number of independent trials). So this should not be applied to computed rates (successes divided by trials) directly; instead, glm() wants you to supply a matrix with successes and failures. Consequently, your glm() call above yields the warning:

Warning message:
In eval(expr, envir, enclos) : non-integer #successes in a binomial glm!

The beta regression model, on the other hand, is intended for situations where you only have a direct rate that does not correspond to success rates from a known number of independent trials. It uses a different likelihood and hence can lead to different results. Specifically, it has an additional precision parameter which is related to the variance of the observations. Thus, if your proportions above come from a known number of independent trials, then supply this information and use a binomial GLM. Otherwise you can consider beta regression. Additional remark: As your Y above supplies proportions directly, the binomial likelihood does not fit. Specifically, the variance of the observations will be overestimated. If you use a quasi-binomial with an additional dispersion parameter, the model still won't be really appropriate but much closer to the beta regression results.

R> summary(betareg(Y ~ X))

Call:
betareg(formula = Y ~ X)

Standardized weighted residuals 2:
    Min      1Q  Median      3Q     Max
-1.7480 -0.8042 -0.1105  0.8864  1.8896

Coefficients (mean model with logit link):
            Estimate Std. Error z value Pr(>|z|)
(Intercept)  0.29444    0.08715   3.378 0.000729 ***
X            0.27270    0.09068   3.007 0.002637 **

Phi coefficients (precision model with identity link):
      Estimate Std. Error z value Pr(>|z|)
(phi)    41.06      15.92   2.579   0.0099 **
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Type of estimator: ML (maximum likelihood)
Log-likelihood: 15.15 on 3 Df
Pseudo R-squared: 0.4149
Number of iterations: 34 (BFGS) + 2 (Fisher scoring)

R> summary(glm(Y ~ X, family = quasibinomial))

Call:
glm(formula = Y ~ X, family = quasibinomial)

Deviance Residuals:
     Min       1Q   Median       3Q      Max
-0.25696 -0.11263 -0.01107  0.13491  0.25792

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)  0.29284    0.09523   3.075   0.0106 *
X            0.27078    0.09910   2.732   0.0195 *
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

(Dispersion parameter for quasibinomial family taken to be 0.02836306)

    Null deviance: 0.52867 on 12 degrees of freedom
Residual deviance: 0.31489 on 11 degrees of freedom
AIC: NA

Number of Fisher Scoring iterations: 3
39,047
Should the fumble rate of NFL teams be a normal distribution?
Theoretically, a normal distribution has a nonzero probability of negative numbers. So that's right out. A normal is also fully continuous, whereas fumble rates would be discrete or rational. It could be very close, and good enough; for example, the sum of many binomials (had a fumble or didn't, with x% chance, summed across 100 games) approaches what looks like a normal bell curve. People go to Poisson because it is a discrete counting variable, with integer results defined from independent results; that is to say, if each play had a consistent fumble probability, then over 100 plays the final fumble count would be Poisson distributed. If there's any correlation within the ranks then it won't be any theoretical (clean) distribution. If for example having a lot of fumbles reduces your total number of plays in that game, then it's a self-correlated score and things get messy. I do believe if your first dozen plays all had a fumble (not likely but possible), then you might not get any more. It's definitely not an independent sum of even probabilities. If the coach is allowed to remove a player who has had several fumbles, then the rate would decrease from that point on, another non-independence of the score. The real observed distribution sure could look a lot like a normal in any event. Do you have any data we could play with? EDIT: We see some data at this link: http://www.sharpfootballanalysis.com/blog/2015/the-new-england-patriots-mysteriously-became-fumble-proof-in-2007 Thanks Affine for finding that. And in that article the claim is made more explicitly: "Based on the assumption that plays per fumble follow a normal distribution, you’d expect to see, according to random fluctuation, the results that the Patriots have gotten since 2007 once in 5842 instances." Which is a malformed hypothesis: you'd never care about the probability of an exact answer; the question of interest is how likely is any result this extreme OR HIGHER, combined.
A point result has an extremely rare probability, but if there's a fat tail to the distribution, then perhaps more extreme results can happen, and the outlier event is really not so extreme. As this is an inverse distribution, Touches per Fumble, consider both variables as random Poisson: you get so many touches per game and you see so many fumbles per game. The ratio will have a long tail, because it's possible to have many, many touches with few fumbles. The outlier is to be expected; even looking at the previous decade's results, there was an outlier at 56 TpF which didn't get any comment from the blog author.
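The long-tail claim about the ratio is easy to see in simulation. Here is a hypothetical sketch (the Poisson means are invented, not taken from NFL data):

```python
import numpy as np

rng = np.random.default_rng(42)
# Invented per-season means: neither number comes from real NFL data
touches = rng.poisson(900, size=100_000)
fumbles = np.maximum(rng.poisson(20, size=100_000), 1)  # guard against division by zero
tpf = touches / fumbles                                  # touches per fumble

# Right skew: lucky low-fumble seasons drag the mean above the median
skewed_right = tpf.mean() > np.median(tpf)
```

Because the fumble count sits in the denominator, seasons with unusually few fumbles produce very large Touches-per-Fumble values, giving the ratio a long right tail even when both underlying counts are well behaved.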
Should the fumble rate of NFL teams be a normal distribution?
Theoretically, a normal distribution has a nonzero probability of negative numbers. So that's right out. A normal also has fully continuous distribution, whereas fumble rates would be discrete or rati
Should the fumble rate of NFL teams be a normal distribution? Theoretically, a normal distribution has a nonzero probability of negative numbers. So that's right out. A normal also has a fully continuous distribution, whereas fumble rates would be discrete or rational. It could be very close, and good enough; for example, the sum of many binomials (had a fumble or didn't with x% chance, summed across 100 games) approaches what looks like a normal bell curve. People go to Poisson because it is a discrete counting variable, with integer results defined from independent results; that is to say, if each play had a consistent fumble probability, then over 100 plays the final fumble count would be Poisson distributed. If there's any correlation within the ranks then it won't be any theoretical (clean) distribution. If, for example, having a lot of fumbles reduces your total number of plays in that game, then it's a self-correlated score and things get messy. I do believe if your first dozen plays all had a fumble (not likely but possible), then you might not get any more. It's definitely not an independent sum of even probabilities. If the coach is allowed to remove a player who has had several fumbles, then the rate would decrease from that point on, another non-independence of the score. The real observed distribution sure could look a lot like a normal in any event. Do you have any data we could play with? EDIT: We see some data at this link: http://www.sharpfootballanalysis.com/blog/2015/the-new-england-patriots-mysteriously-became-fumble-proof-in-2007 Thanks Affine for finding that. And in that article the claim is made more explicitly: "Based on the assumption that plays per fumble follow a normal distribution, you’d expect to see, according to random fluctuation, the results that the Patriots have gotten since 2007 once in 5842 instances." 
Which is a malformed hypothesis: you'd never care about the probability of an exact answer; the question of interest is how likely any result this extreme OR HIGHER is, combined. A point result has an extremely rare probability, but if there's a fat tail to the distribution, then perhaps more extreme results can happen, and the outlier event is really not so extreme. As this is an inverse distribution, Touches per Fumble, consider both variables as random Poisson: you get so many touches per game and see so many fumbles per game. The ratio will have a long tail, because it's possible to have many many touches with few fumbles. The outlier is to be expected; even looking at the previous decade's results, there was an outlier at 56 TpF which didn't get any comment from the blog author.
Should the fumble rate of NFL teams be a normal distribution? Theoretically, a normal distribution has a nonzero probability of negative numbers. So that's right out. A normal also has fully continuous distribution, whereas fumble rates would be discrete or rati
39,048
Should the fumble rate of NFL teams be a normal distribution?
would distribution of season fumbles (y) by NFL team (x) be a normal distribution? Under no circumstances is a non-negative and discrete random variable actually normal. In some circumstances (discreteness aside) it might not be terrible as an approximation, but it wouldn't be the first approximation I'd look at. "it would be a normal distribution if the fumble rate for each team per game/ season is equal" -- no, that doesn't do it ... though homogeneity might lead to a less skew distribution than otherwise. "the distribution of fumbles/ season by NFL Team is actually a Poisson distribution" -- well, at least it's not immediately ruled out by the domain of the variable, but (except perhaps as a rough approximation) I would think it would be readily rejected as a possibility; I expect that heterogeneity (across team make-up, opposition, conditions etc) would make it more heavily skew; there may also be a possibility of some serial dependence (outside that caused by intermittent changes resulting from heterogeneity). "* for modeling when a call might come in within the next hour, or when a dice might come up 6 after N tosses*" when a call might come is continuous, so no. "when a die might come up 6..." -- again, no. Your description of what the random variable will be isn't completely clear there, but that sounds like one of "number of tosses to the first 6" (a geometric distribution), "number of tosses to the Nth 6" (a negative binomial) or "number of 6's in N tosses" (a binomial) -- but even if you meant something else, it still won't be Poisson. (Note that 'dice' is plural, 'die' is singular, so only ever 'a die'. You need at least two of them to have 'dice') By comparison, the "fumbles per season" one being Poisson is at least plausible as a suggestion, but I think for a variety of reasons it won't be Poisson either.
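The heterogeneity point can be sketched numerically (all rates and sample sizes below are invented for illustration): when every team-season shares one rate, counts behave like a Poisson with variance roughly equal to the mean; letting the rate vary across team make-up, opposition, conditions, etc. produces clear overdispersion.

```python
import math
import random
import statistics

rng = random.Random(1)

def poisson(lam):
    """Knuth's Poisson sampler -- adequate for the small rates used here."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

# homogeneous: every team-season has the same fumble rate
hom = [poisson(20) for _ in range(4000)]

# heterogeneous: each team-season draws its own rate
het = [poisson(rng.uniform(10, 30)) for _ in range(4000)]

print(round(statistics.variance(hom) / statistics.mean(hom), 2))  # near 1
print(round(statistics.variance(het) / statistics.mean(het), 2))  # well above 1
```

A variance-to-mean ratio well above 1 is exactly the kind of extra skew and spread that would get a pure-Poisson model rejected on real season totals.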
Should the fumble rate of NFL teams be a normal distribution?
would distribution of season fumbles (y) by NFL team (x) be a normal distribution? Under no circumstances is a non-negative and discrete random variable actually normal. In some circumstances (disc
Should the fumble rate of NFL teams be a normal distribution? would distribution of season fumbles (y) by NFL team (x) be a normal distribution? Under no circumstances is a non-negative and discrete random variable actually normal. In some circumstances (discreteness aside) it might not be terrible as an approximation, but it wouldn't be the first approximation I'd look at. "it would be a normal distribution if the fumble rate for each team per game/ season is equal" -- no, that doesn't do it ... though homogeneity might lead to a less skew distribution than otherwise. "the distribution of fumbles/ season by NFL Team is actually a Poisson distribution" -- well, at least it's not immediately ruled out by the domain of the variable, but (except perhaps as a rough approximation) I would think it would be readily rejected as a possibility; I expect that heterogeneity (across team make-up, opposition, conditions etc) would make it more heavily skew; there may also be a possibility of some serial dependence (outside that caused by intermittent changes resulting from heterogeneity). "* for modeling when a call might come in within the next hour, or when a dice might come up 6 after N tosses*" when a call might come is continuous, so no. "when a die might come up 6..." -- again, no. Your description of what the random variable will be isn't completely clear there, but that sounds like one of "number of tosses to the first 6" (a geometric distribution), "number of tosses to the Nth 6" (a negative binomial) or "number of 6's in N tosses" (a binomial) -- but even if you meant something else, it still won't be Poisson. (Note that 'dice' is plural, 'die' is singular, so only ever 'a die'. You need at least two of them to have 'dice') By comparison, the "fumbles per season" one being Poisson is at least plausible as a suggestion, but I think for a variety of reasons it won't be Poisson either.
Should the fumble rate of NFL teams be a normal distribution? would distribution of season fumbles (y) by NFL team (x) be a normal distribution? Under no circumstances is a non-negative and discrete random variable actually normal. In some circumstances (disc
39,049
Which residuals to analyse when dependent variable is transformed?
For residual analysis, you should use the residuals obtained directly from the regression. No back-transformation is needed. This is because you want to make sure that your regression is valid (that it satisfies the underlying assumptions), which is sort of a "mechanical" issue, not a subject-matter issue. Thus you look at the regression and its residuals directly, not at some transformation thereof.
Which residuals to analyse when dependent variable is transformed?
For residual analysis, you should use the residuals obtained directly from the regression. No back-transformation is needed. This is because you want to make sure that your regression is valid (that i
Which residuals to analyse when dependent variable is transformed? For residual analysis, you should use the residuals obtained directly from the regression. No back-transformation is needed. This is because you want to make sure that your regression is valid (that it satisfies the underlying assumptions), which is sort of a "mechanical" issue, not a subject-matter issue. Thus you look at the regression and its residuals directly, not at some transformation thereof.
Which residuals to analyse when dependent variable is transformed? For residual analysis, you should use the residuals obtained directly from the regression. No back-transformation is needed. This is because you want to make sure that your regression is valid (that i
39,050
Which residuals to analyse when dependent variable is transformed?
You might be better off fitting a generalized linear model instead of a "plain" linear model, and analyzing the residuals of the GLM instead. This procedure and a few good reasons for doing so are laid out in this answer. GLMs have more than one kind of residual, but there is a large literature on analyzing them. In case you balk at the idea of switching from OLS to ML, or you're hesitant to impose distributional assumptions on the response, consider that regression with OLS is equivalent to a GLM that assumes a normally distributed response and the identity link function. Moreover, regression models (generalized or not) describe a conditional mean, but making predictions and then un-transforming the predictions does not in general produce a conditional mean for the un-transformed response. In your case, $E(\sqrt{y}) \neq \sqrt{E(y)}$. (edit/update) Consider a response $y$ and its transformation $y'=\sqrt{y}$. You fit the regression model $$y'=\beta_0 + \beta x + \varepsilon$$ which, if $\operatorname{E}(\varepsilon|x)=0$ (as we assume for OLS), is equivalent to the model $$\operatorname{E}(y'|x) = \operatorname{E}(\sqrt{y}|x) = \beta_0 + \beta x$$ The problem is that $\left(\operatorname{E}(\sqrt{y}|x)\right)^2 \neq \operatorname{E}(y|x)$ in general. Fortunately, in this particular case we can move forward without making any additional assumptions by appealing to the formula $\operatorname{V}(Z) = \operatorname{E}(Z^2) - \left(\operatorname{E}(Z)\right)^2 \implies \operatorname{E}(Z^2) = \operatorname{V}(Z) + \left(\operatorname{E}(Z)\right)^2$, so that $$\operatorname{E}(y|x) = \operatorname{V}(\sqrt{y}|x) + \left(\operatorname{E}(\sqrt{y}|x)\right)^2$$ and therefore $$ \widehat{y} = \widehat{\sigma^2} + \left(\widehat{y'}\right)^2 $$ In general, however, you will need to make some more assumptions. 
If you assume that $(y|x) \sim Normal(\beta_0 + \beta x, \sigma^2)$, which is implicit in OLS, you can usually derive the transformation by applying the Jacobian to the Gaussian PDF and taking its expectation. With a log-transformed response, for instance, the original-scale response variable follows a log-normal distribution, so the correct back-transformation would be $\widehat{y} = e^{\widehat{y'} + \frac{\widehat{\sigma^2}}{2}}$. This particular (and very common) case is demonstrated nicely on David Giles' blog.
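A quick numeric check of the variance-based back-transformation for the square-root case (the coefficients and $\sigma$ below are invented): squaring the transformed-scale prediction understates $\operatorname{E}(y|x)$, while adding $\widehat{\sigma^2}$ recovers it, matching a Monte Carlo estimate.

```python
import random
import statistics

rng = random.Random(3)

# invented transformed-scale model: sqrt(y) = 2 + 0.5*x + eps, eps ~ N(0, 1)
beta0, beta1, sigma = 2.0, 0.5, 1.0
x0 = 4.0
mu = beta0 + beta1 * x0                  # E[sqrt(y) | x0] = 4

# Monte Carlo estimate of the original-scale conditional mean E[y | x0]
e_y = statistics.mean((mu + rng.gauss(0, sigma)) ** 2 for _ in range(200_000))

naive = mu ** 2                    # just squaring the prediction: biased low
corrected = sigma ** 2 + mu ** 2   # Var + mean^2, as derived in the answer

print(naive, corrected, round(e_y, 2))  # 16.0, 17.0, and a value near 17
```

Here the naive back-transform misses the conditional mean by exactly $\sigma^2$; the same kind of gap is what the log-normal correction $e^{\widehat{y'} + \widehat{\sigma^2}/2}$ fixes for a log transformation.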
Which residuals to analyse when dependent variable is transformed?
You might be better off fitting a generalized linear model instead of a "plain" linear model, and analyzing the residuals of the GLM instead. This procedure and a few good reasons for doing so are lai
Which residuals to analyse when dependent variable is transformed? You might be better off fitting a generalized linear model instead of a "plain" linear model, and analyzing the residuals of the GLM instead. This procedure and a few good reasons for doing so are laid out in this answer. GLMs have more than one kind of residual, but there is a large literature on analyzing them. In case you balk at the idea of switching from OLS to ML, or you're hesitant to impose distributional assumptions on the response, consider that regression with OLS is equivalent to a GLM that assumes a normally distributed response and the identity link function. Moreover, regression models (generalized or not) describe a conditional mean, but making predictions and then un-transforming the predictions does not in general produce a conditional mean for the un-transformed response. In your case, $E(\sqrt{y}) \neq \sqrt{E(y)}$. (edit/update) Consider a response $y$ and its transformation $y'=\sqrt{y}$. You fit the regression model $$y'=\beta_0 + \beta x + \varepsilon$$ which, if $\operatorname{E}(\varepsilon|x)=0$ (as we assume for OLS), is equivalent to the model $$\operatorname{E}(y'|x) = \operatorname{E}(\sqrt{y}|x) = \beta_0 + \beta x$$ The problem is that $\left(\operatorname{E}(\sqrt{y}|x)\right)^2 \neq \operatorname{E}(y|x)$ in general. Fortunately, in this particular case we can move forward without making any additional assumptions by appealing to the formula $\operatorname{V}(Z) = \operatorname{E}(Z^2) - \left(\operatorname{E}(Z)\right)^2 \implies \operatorname{E}(Z^2) = \operatorname{V}(Z) + \left(\operatorname{E}(Z)\right)^2$, so that $$\operatorname{E}(y|x) = \operatorname{V}(\sqrt{y}|x) + \left(\operatorname{E}(\sqrt{y}|x)\right)^2$$ and therefore $$ \widehat{y} = \widehat{\sigma^2} + \left(\widehat{y'}\right)^2 $$ In general, however, you will need to make some more assumptions. 
If you assume that $(y|x) \sim Normal(\beta_0 + \beta x, \sigma^2)$, which is implicit in OLS, you can usually derive the transformation by applying the Jacobian to the Gaussian PDF and taking its expectation. With a log-transformed response, for instance, the original-scale response variable follows a log-normal distribution, so the correct back-transformation would be $\widehat{y} = e^{\widehat{y'} + \frac{\widehat{\sigma^2}}{2}}$. This particular (and very common) case is demonstrated nicely on David Giles' blog.
Which residuals to analyse when dependent variable is transformed? You might be better off fitting a generalized linear model instead of a "plain" linear model, and analyzing the residuals of the GLM instead. This procedure and a few good reasons for doing so are lai
39,051
Expectation of $(X + Y)^2$ where $X$ and $Y$ are independent Poisson random variables
why can't I expand the $(X + Y)^2$ to get $E(X^2 + 2XY + Y^2)$ You can. and then use linearity to get $E(X^2) + 2E(X)E(Y) + E(Y^2)$ You can Then since $X$ and $Y$ are independent, right ... but you already used independence in the previous step when you wrote $E(XY)$ as $E(X)\,E(Y)$. this would give $E(X)^2 + 2(1)(1) + E(Y)^2 = 1 + 2 + 1 = 4$. Nope. You just went $E(X^2) = E(X)^2$; that's not true. You got this manipulation right when you did it the other way. Note that: $E(X^2) = \text{Var}(X) + E(X)^2 = 1 + 1 = 2$ and then the result follows. (It would be easiest to add the random variables first and then find the expectation of the square, but it's quite doable by expanding the square)
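The arithmetic is easy to verify by simulation (Poisson(1) sampled with Knuth's method, stdlib only): $E[(X+Y)^2] = \operatorname{Var}(X+Y) + E(X+Y)^2 = 2 + 4 = 6$, not 4.

```python
import math
import random
import statistics

rng = random.Random(4)

def poisson1():
    """Knuth's sampler for a Poisson(1) draw."""
    L, k, p = math.exp(-1.0), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

exact = 2 + 2 ** 2   # Var(Z) + E[Z]^2 for Z = X + Y ~ Poisson(2)
sim = statistics.mean((poisson1() + poisson1()) ** 2 for _ in range(100_000))
print(exact, round(sim, 2))   # 6 and a simulated value close to 6
```

Expanding the square term by term gives the same thing: $E(X^2) + 2E(X)E(Y) + E(Y^2) = 2 + 2 + 2 = 6$.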
Expectation of $(X + Y)^2$ where $X$ and $Y$ are independent Poisson random variables
why can't I expand the $(X + Y)^2$ to get $E(X^2 + 2XY + Y^2)$ You can. and then use linearity to get $E(X^2) + 2E(X)E(Y) + E(Y^2)$ You can Then since $X$ and $Y$ are independent, right ... but
Expectation of $(X + Y)^2$ where $X$ and $Y$ are independent Poisson random variables why can't I expand the $(X + Y)^2$ to get $E(X^2 + 2XY + Y^2)$ You can. and then use linearity to get $E(X^2) + 2E(X)E(Y) + E(Y^2)$ You can Then since $X$ and $Y$ are independent, right ... but you already used independence in the previous step when you wrote $E(XY)$ as $E(X)\,E(Y)$. this would give $E(X)^2 + 2(1)(1) + E(Y)^2 = 1 + 2 + 1 = 4$. Nope. You just went $E(X^2) = E(X)^2$; that's not true. You got this manipulation right when you did it the other way. Note that: $E(X^2) = \text{Var}(X) + E(X)^2 = 1 + 1 = 2$ and then the result follows. (It would be easiest to add the random variables first and then find the expectation of the square, but it's quite doable by expanding the square)
Expectation of $(X + Y)^2$ where $X$ and $Y$ are independent Poisson random variables why can't I expand the $(X + Y)^2$ to get $E(X^2 + 2XY + Y^2)$ You can. and then use linearity to get $E(X^2) + 2E(X)E(Y) + E(Y^2)$ You can Then since $X$ and $Y$ are independent, right ... but
39,052
Expectation of $(X + Y)^2$ where $X$ and $Y$ are independent Poisson random variables
You cannot say $E(X^2)=E(X)^2$ since $X$ is not independent of $X$. For this problem you can set $Z=X_1+X_2 \sim \text{Poisson}$ with mean $2$. Then you can find $E(Z^2)$ easily.
Expectation of $(X + Y)^2$ where $X$ and $Y$ are independent Poisson random variables
You cannot say $E(X^2)=E(X)^2$ since $X$ is not independent of $X$. For this problem you can set $Z=X_1+X_2 \sim \text{Poisson}$ with mean $2$. Then you can find $E(Z^2)$ easily.
Expectation of $(X + Y)^2$ where $X$ and $Y$ are independent Poisson random variables You cannot say $E(X^2)=E(X)^2$ since $X$ is not independent of $X$. For this problem you can set $Z=X_1+X_2 \sim \text{Poisson}$ with mean $2$. Then you can find $E(Z^2)$ easily.
Expectation of $(X + Y)^2$ where $X$ and $Y$ are independent Poisson random variables You cannot say $E(X^2)=E(X)^2$ since $X$ is not independent of $X$. For this problem you can set $Z=X_1+X_2 \sim \text{Poisson}$ with mean $2$. Then you can find $E(Z^2)$ easily.
39,053
In convolutional neural network, what does fully-connected layer mean?
Every neuron from the previous layer is connected to every neuron on the next layer.
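Stripped of framework machinery, a fully-connected (dense) layer is just a matrix-vector product plus a bias, where each output depends on all inputs. A tiny pure-Python sketch (sizes and weights are made up for illustration):

```python
import random

rng = random.Random(5)

n_in, n_out = 4, 3                                     # invented layer sizes
x = [rng.gauss(0, 1) for _ in range(n_in)]             # flattened input features
W = [[rng.gauss(0, 1) for _ in range(n_in)] for _ in range(n_out)]
b = [0.1] * n_out

# "fully connected": every output neuron uses EVERY input, i.e. out = W @ x + b
out = [sum(w * xi for w, xi in zip(row, x)) + bi for row, bi in zip(W, b)]

n_params = n_in * n_out + n_out   # why these layers dominate parameter counts
print(len(out), n_params)
```

Contrast this with a convolutional layer, where each output only sees a small local patch of the input and weights are shared across positions.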
In convolutional neural network, what does fully-connected layer mean?
Every neuron from the previous layer is connected to every neuron on the next layer.
In convolutional neural network, what does fully-connected layer mean? Every neuron from the previous layer is connected to every neuron on the next layer.
In convolutional neural network, what does fully-connected layer mean? Every neuron from the previous layer is connected to every neuron on the next layer.
39,054
In convolutional neural network, what does fully-connected layer mean?
The convolutional and the Pooling layers create a feature space, and the flattened fully connected layer can be thought of as a cheap way of learning a linear function out of the feature space. For the convenience of understanding, think of it as a PCA that selects the good features among the feature space created by the Conv and POOL layers.
In convolutional neural network, what does fully-connected layer mean?
The convolutional and the Pooling layers create a feature space, and the flattened fully connected layer can be thought of as a cheap way of learning a linear function out of the feature space. For the con
In convolutional neural network, what does fully-connected layer mean? The convolutional and the Pooling layers create a feature space, and the flattened fully connected layer can be thought of as a cheap way of learning a linear function out of the feature space. For the convenience of understanding, think of it as a PCA that selects the good features among the feature space created by the Conv and POOL layers.
In convolutional neural network, what does fully-connected layer mean? The convolutional and the Pooling layers create a feature space, and the flattened fully connected layer can be thought of as a cheap way of learning a linear function out of the feature space. For the con
39,055
How do I enter a continuous variable as a random effect in a linear mixed effects model?
I'll elaborate on what I think Sergio meant in his comment. A random effect is always associated with a categorical variable. This categorical variable will most often divide the observations into different observational units (this could, for instance, be Dam in your data set, as it seems reasonable to assume that observations from the same dam are more alike than observations from different dams). Using something like (1|Dam) will give you a random intercept on that variable. Using a continuous predictor like Density you can get a random slope on the predictor. Then you'll have to use (Density|Dam) in your model formula. This will give you a (random) slope, i.e. an effect of Density, for each level of Dam. What you're doing in the above code is forcing Density to be used as a categorical predictor, i.e. making a random intercept for each level (value) of Density. This is probably not what you want.
How do I enter a continuous variable as a random effect in a linear mixed effects model?
I'll elaborate on what I think Sergio meant in his comment. A random effect is always associated with a categorical variable. This categorical variable will most often divide the observations into dif
How do I enter a continuous variable as a random effect in a linear mixed effects model? I'll elaborate on what I think Sergio meant in his comment. A random effect is always associated with a categorical variable. This categorical variable will most often divide the observations into different observational units (this could, for instance, be Dam in your data set, as it seems reasonable to assume that observations from the same dam are more alike than observations from different dams). Using something like (1|Dam) will give you a random intercept on that variable. Using a continuous predictor like Density you can get a random slope on the predictor. Then you'll have to use (Density|Dam) in your model formula. This will give you a (random) slope, i.e. an effect of Density, for each level of Dam. What you're doing in the above code is forcing Density to be used as a categorical predictor, i.e. making a random intercept for each level (value) of Density. This is probably not what you want.
How do I enter a continuous variable as a random effect in a linear mixed effects model? I'll elaborate on what I think Sergio meant in his comment. A random effect is always associated with a categorical variable. This categorical variable will most often divide the observations into dif
39,056
Linearity between predictors and dependent variable in a linear model
To add to AdamO's answer, I was taught to base my decisions regarding model assumptions more on whether failing to correct the assumption in some way causes me to misrepresent my data. For a concrete example of what I mean, I simulated some data in R and created some plots and ran some diagnostics using these data. # lmSupport contains the lm.modelAssumptions function that I use below require(lmSupport) set.seed(12234) # Create some data with a strong quadratic component x <- rnorm(200, sd = 1) y <- x + .75 * x^2 + rnorm(200, sd = 1) # There is a significant linear trend mod <- lm(y ~ x) summary(mod) Call: lm(formula = y ~ x) Residuals: Min 1Q Median 3Q Max -2.7972 -0.9511 -0.1312 0.6659 5.8659 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 0.77981 0.10463 7.453 2.77e-12 *** x 1.19417 0.09795 12.191 < 2e-16 *** --- Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 Residual standard error: 1.477 on 198 degrees of freedom Multiple R-squared: 0.4288, Adjusted R-squared: 0.4259 F-statistic: 148.6 on 1 and 198 DF, p-value: < 2.2e-16 However, when plotting the data, it's clear that the curvilinear component is an important aspect of the relationship between x and y. pX <- seq(min(x), max(x), by = .1) pY <- predict(mod, data.frame(x = pX)) plot(x, y, frame = F) lines(pX, pY, col = "red") A diagnostic test of linearity also supports our argument that the quadratic component is an important aspect of the relationship between x and y for these data. lm.modelAssumptions(mod, "linear") Call: lm(formula = y ~ x) Coefficients: (Intercept) x 0.7798 1.1942 ASSESSMENT OF THE LINEAR MODEL ASSUMPTIONS USING THE GLOBAL TEST ON 4 DEGREES-OF-FREEDOM: Level of Significance = 0.05 Call: gvlma(x = model) Value p-value Decision Global Stat 180.04567 0.000e+00 Assumptions NOT satisfied! Skewness 32.67166 1.091e-08 Assumptions NOT satisfied! Kurtosis 23.99022 9.683e-07 Assumptions NOT satisfied! Link Function 123.35831 0.000e+00 Assumptions NOT satisfied! 
Heteroscedasticity 0.02547 8.732e-01 Assumptions acceptable. # We should probably add the quadratic component to this model mod <- lm(y ~ x + I(x^2)) Let's see what happens when we simulate data with a smaller (but still significant) nonlinear trend. y <- x + .25 * x^2 + rnorm(200, sd = 1) mod <- lm(y ~ x) summary(mod) Call: lm(formula = y ~ x) Residuals: Min 1Q Median 3Q Max -2.59701 -0.77446 0.03546 0.80261 2.75938 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 0.30500 0.07907 3.858 0.000155 *** x 0.99934 0.07402 13.500 < 2e-16 *** --- Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 Residual standard error: 1.116 on 198 degrees of freedom Multiple R-squared: 0.4793, Adjusted R-squared: 0.4767 F-statistic: 182.3 on 1 and 198 DF, p-value: < 2.2e-16 If we examine a plot of these new data, it's pretty clear that they are well-represented by just the linear trend. pX <- seq(min(x), max(x), by = .1) pY <- predict(mod, data.frame(x = pX)) plot(x, y, frame = F) lines(pX, pY, col = "red") This is in spite of the fact that this model fails a diagnostic test of linearity. lm.modelAssumptions(mod, "linear") Call: lm(formula = y ~ x) Coefficients: (Intercept) x 0.3050 0.9993 ASSESSMENT OF THE LINEAR MODEL ASSUMPTIONS USING THE GLOBAL TEST ON 4 DEGREES-OF-FREEDOM: Level of Significance = 0.05 Call: gvlma(x = model) Value p-value Decision Global Stat 34.6428 5.500e-07 Assumptions NOT satisfied! Skewness 0.3355 5.624e-01 Assumptions acceptable. Kurtosis 2.0094 1.563e-01 Assumptions acceptable. Link Function 32.1379 1.436e-08 Assumptions NOT satisfied! Heteroscedasticity 0.1600 6.892e-01 Assumptions acceptable. My point is that diagnostic tests should not be a substitute for thinking on the part of the analyst; they are tools to help you understand whether your substantive conclusions follow from your analyses. For this reason, I prefer to look at different types of plots rather than rely on global tests when I'm making these sorts of decisions.
Linearity between predictors and dependent variable in a linear model
To add to AdamO's answer, I was taught to base my decisions regarding model assumptions more on whether failing to correct the assumption in some way causes me to misrepresent my data. For a concrete
Linearity between predictors and dependent variable in a linear model To add to AdamO's answer, I was taught to base my decisions regarding model assumptions more on whether failing to correct the assumption in some way causes me to misrepresent my data. For a concrete example of what I mean, I simulated some data in R and created some plots and ran some diagnostics using these data. # lmSupport contains the lm.modelAssumptions function that I use below require(lmSupport) set.seed(12234) # Create some data with a strong quadratic component x <- rnorm(200, sd = 1) y <- x + .75 * x^2 + rnorm(200, sd = 1) # There is a significant linear trend mod <- lm(y ~ x) summary(mod) Call: lm(formula = y ~ x) Residuals: Min 1Q Median 3Q Max -2.7972 -0.9511 -0.1312 0.6659 5.8659 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 0.77981 0.10463 7.453 2.77e-12 *** x 1.19417 0.09795 12.191 < 2e-16 *** --- Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 Residual standard error: 1.477 on 198 degrees of freedom Multiple R-squared: 0.4288, Adjusted R-squared: 0.4259 F-statistic: 148.6 on 1 and 198 DF, p-value: < 2.2e-16 However, when plotting the data, it's clear that the curvilinear component is an important aspect of the relationship between x and y. pX <- seq(min(x), max(x), by = .1) pY <- predict(mod, data.frame(x = pX)) plot(x, y, frame = F) lines(pX, pY, col = "red") A diagnostic test of linearity also supports our argument that the quadratic component is an important aspect of the relationship between x and y for these data. lm.modelAssumptions(mod, "linear") Call: lm(formula = y ~ x) Coefficients: (Intercept) x 0.7798 1.1942 ASSESSMENT OF THE LINEAR MODEL ASSUMPTIONS USING THE GLOBAL TEST ON 4 DEGREES-OF-FREEDOM: Level of Significance = 0.05 Call: gvlma(x = model) Value p-value Decision Global Stat 180.04567 0.000e+00 Assumptions NOT satisfied! Skewness 32.67166 1.091e-08 Assumptions NOT satisfied! Kurtosis 23.99022 9.683e-07 Assumptions NOT satisfied! 
Link Function 123.35831 0.000e+00 Assumptions NOT satisfied! Heteroscedasticity 0.02547 8.732e-01 Assumptions acceptable. # We should probably add the quadratic component to this model mod <- lm(y ~ x + I(x^2)) Let's see what happens when we simulate data with a smaller (but still significant) nonlinear trend. y <- x + .25 * x^2 + rnorm(200, sd = 1) mod <- lm(y ~ x) summary(mod) Call: lm(formula = y ~ x) Residuals: Min 1Q Median 3Q Max -2.59701 -0.77446 0.03546 0.80261 2.75938 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 0.30500 0.07907 3.858 0.000155 *** x 0.99934 0.07402 13.500 < 2e-16 *** --- Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 Residual standard error: 1.116 on 198 degrees of freedom Multiple R-squared: 0.4793, Adjusted R-squared: 0.4767 F-statistic: 182.3 on 1 and 198 DF, p-value: < 2.2e-16 If we examine a plot of these new data, it's pretty clear that they are well-represented by just the linear trend. pX <- seq(min(x), max(x), by = .1) pY <- predict(mod, data.frame(x = pX)) plot(x, y, frame = F) lines(pX, pY, col = "red") This is in spite of the fact that this model fails a diagnostic test of linearity. lm.modelAssumptions(mod, "linear") Call: lm(formula = y ~ x) Coefficients: (Intercept) x 0.3050 0.9993 ASSESSMENT OF THE LINEAR MODEL ASSUMPTIONS USING THE GLOBAL TEST ON 4 DEGREES-OF-FREEDOM: Level of Significance = 0.05 Call: gvlma(x = model) Value p-value Decision Global Stat 34.6428 5.500e-07 Assumptions NOT satisfied! Skewness 0.3355 5.624e-01 Assumptions acceptable. Kurtosis 2.0094 1.563e-01 Assumptions acceptable. Link Function 32.1379 1.436e-08 Assumptions NOT satisfied! Heteroscedasticity 0.1600 6.892e-01 Assumptions acceptable. My point is that diagnostic tests should not be a substitute for thinking on the part of the analyst; they are tools to help you understand whether your substantive conclusions follow from your analyses. 
For this reason, I prefer to look at different types of plots rather than rely on global tests when I'm making these sorts of decisions.
Linearity between predictors and dependent variable in a linear model To add to AdamO's answer, I was taught to base my decisions regarding model assumptions more on whether failing to correct the assumption in some way causes me to misrepresent my data. For a concrete
39,057
Linearity between predictors and dependent variable in a linear model
The emphasis upon so-called assumptions in linear regression modeling is evidence that pedagogy in the applied fields is not in line with traditional statistical theory. In particular, the aforementioned "straight line assumption" (or non-existence of higher order effects) is completely erroneous depending upon your intended application. For inference, where we test whether a regression parameter $\beta = 0$ under the null hypothesis, it should be stressed that $\beta$ need not model the truth; it's just a first-order trend indicating the direction (sign) and strength (value) of that association. Such is the case with regression models and the name itself: a simplified "rule of thumb". Smoking is negatively associated with survival, social drinking is positively associated with obesity, etc. are examples of such rules of thumb gleaned from rules-of-thumb regression models. For prediction, the complexity of a trend can be infinite provided a sufficient quantity of data has been provided for the application. Thus it's moot to ask "is a straight line enough" when we should be asking "would a fractional polynomial / smoothing spline / etc. do better?" That decision, of course, would be based upon the overall predictive accuracy determined in a split-sample, cross-validated independent dataset. In both cases, making decisions based on visual inspections about the types of models you fit greatly increases the risk of overfitting the trend (committing type I errors in the inference case). I think the answer is to consider exactly what the regression model is intended to do and determine how flexible and robust the model is for that application.
39,058
Linear regression with constrained coefficient
General constrained OLS problem

Recall that the OLS problem, subject to linear constraints, can be written as $$ \begin{align} \arg\min_{\boldsymbol{\beta}}\boldsymbol{Y}'\boldsymbol{Y} - \boldsymbol{Y}'\mathbf{X}\boldsymbol{\beta} - \boldsymbol{\beta}'\mathbf{X}'\boldsymbol{Y} + \boldsymbol{\beta}'\mathbf{X}'\mathbf{X}\boldsymbol{\beta} \end{align}\\ \text{subject to }\quad \mathbf{a}\boldsymbol{\beta} = \boldsymbol{c} $$ where in the general case $\mathbf{a}$ is a matrix and $\boldsymbol{c}$ is a vector. Using the facts that the first term does not depend on $\boldsymbol{\beta}$, that we can scale by a constant without changing the solution, and that a scalar is its own transpose, we get $$ \begin{align} \arg\min_{\boldsymbol{\beta}} - \boldsymbol{Y}'\mathbf{X}\boldsymbol{\beta} +\tfrac{1}{2} \boldsymbol{\beta}'\mathbf{X}'\mathbf{X}\boldsymbol{\beta} \end{align}\\ \text{subject to }\quad \mathbf{a}\boldsymbol{\beta} = \boldsymbol{c} $$ Note: I write it this way so that it maps neatly into the way R solves constrained quadratic programming problems.

Specific case

In your case of three coefficients (including the intercept) and one constraint, $$ \begin{align} \mathbf{a} &= [0, 1, 1] \\ \boldsymbol{c} &= 1 \\ \text{so that}\\ \mathbf{a}\boldsymbol{\beta} &= \boldsymbol{c}\\ \implies \beta_2 + \beta_3 &= 1 \end{align} $$

R

This is then a standard quadratic programming problem with a quadratic (in $\boldsymbol{\beta}$) objective function and linear constraints. You can easily solve it using any of the QP packages in R. Here is an example:

library(quadprog)

# generate some data
mX = cbind(1, matrix(rnorm(100*2), nrow = 100, ncol = 2))
vBeta = c(3, 0.81, 0.19) # note that the 2nd and 3rd elements add to one
vY = mX %*% vBeta + rnorm(100)

# solve the quadratic program
qpStackExchange = solve.QP(Dmat = t(mX) %*% mX,  # X'X
                           dvec = t(vY) %*% mX,  # Y'X
                           Amat = matrix(c(0, 1, 1), ncol = 1, nrow = 3), # matrix a
                           bvec = 1,             # vector c
                           meq  = 1)             # equality imposed, rather than inequality

qpStackExchange$solution                # constrained coefficient estimates
qpStackExchange$unconstrained.solution  # unconstrained coefficient estimates
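The same equality-constrained least-squares problem also has a direct closed form via the Lagrangian (KKT) system, with no QP package needed. A hedged NumPy sketch of that route (the simulated data mirror the R example above; variable names are my own):

```python
import numpy as np

# Solve  min ||y - X b||^2  s.t.  a b = c  via the KKT system
#   [ X'X  a' ] [ beta   ]   [ X'y ]
#   [ a    0  ] [ lambda ] = [ c   ]
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(100), rng.normal(size=(100, 2))])
beta_true = np.array([3.0, 0.81, 0.19])   # 2nd and 3rd elements add to one
y = X @ beta_true + rng.normal(size=100)

a = np.array([[0.0, 1.0, 1.0]])           # constraint: beta2 + beta3 = 1
c = np.array([1.0])

kkt = np.block([[X.T @ X, a.T],
                [a, np.zeros((1, 1))]])
rhs = np.concatenate([X.T @ y, c])
beta_hat = np.linalg.solve(kkt, rhs)[:3]  # drop the Lagrange multiplier
print(beta_hat, beta_hat[1] + beta_hat[2])
```

The constraint is satisfied exactly (up to floating point), which is the defining property of the equality-constrained solution.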
39,059
Linear regression with constrained coefficient
This problem can be formulated as a standard errors-in-variables problem. Write: \begin{equation} y - x_2 = a(x_1-x_2) + c \end{equation} Now call $z = y-x_2$ and $w = x_1 -x_2$. Then you have the following problem \begin{equation} z = aw + c \end{equation} Be careful: the errors have changed and are now correlated. If $\eta_k$ is the error of $x_k$ and $\varepsilon$ is the error for $y$, you now have new errors $\phi=\varepsilon -\eta_2$ for $z$ and $\zeta= \eta_1 -\eta_2$ for $w$. \begin{align*} \mathrm{Var}[ \phi] & = \mathrm{Var}[\varepsilon]+\mathrm{Var}[\eta_2] -2\mathrm{Cov}[\varepsilon,\eta_2] \\ \mathrm{Var}[ \zeta] & = \mathrm{Var}[\eta_1]+\mathrm{Var}[\eta_2] -2\mathrm{Cov}[\eta_1,\eta_2] \\ \mathrm{Cov}[\phi,\zeta] & = \mathrm{Cov}[\varepsilon -\eta_2,\eta_1 -\eta_2] \\ & = \mathrm{Cov}[\varepsilon,\eta_1] - \mathrm{Cov}[\varepsilon,\eta_2] -\mathrm{Cov}[\eta_2,\eta_1]+ \mathrm{Var}[\eta_2] \end{align*} If all errors are uncorrelated this is: \begin{align*} \mathrm{Var}[ \phi] & = \mathrm{Var}[\varepsilon]+\mathrm{Var}[\eta_2] \\ \mathrm{Var}[ \zeta] & = \mathrm{Var}[\eta_1]+\mathrm{Var}[\eta_2] \\ \mathrm{Cov}[\phi,\zeta] & = \mathrm{Var}[\eta_2] \end{align*} In both cases this problem has a closed-form solution. You can find it in Fuller's Measurement Error Models (http://www.amazon.com/Measurement-Error-Models-Probability-Statistics/dp/0470095717). I believe it is explained in the introduction chapter.
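Both the reparameterization and the covariance identity above can be checked numerically. A small sketch (Python/NumPy, with made-up parameter values): if $y = a x_1 + (1-a)x_2 + c$, then $z = y - x_2$ equals $a w + c$ with $w = x_1 - x_2$ exactly, and with independent errors, $\mathrm{Cov}[\phi,\zeta] = \mathrm{Var}[\eta_2]$ because $\phi$ and $\zeta$ share the $-\eta_2$ term:

```python
import numpy as np

rng = np.random.default_rng(2)
a_true, c_true = 0.3, 1.5

# 1) the reparameterization is an algebraic identity, not an approximation
x1, x2 = rng.normal(size=1000), rng.normal(size=1000)
y = a_true * x1 + (1 - a_true) * x2 + c_true
z, w = y - x2, x1 - x2
print(np.max(np.abs(z - (a_true * w + c_true))))   # identically zero

# 2) Cov(phi, zeta) = Var(eta2) when eps, eta1, eta2 are independent
eps  = rng.normal(size=200_000)
eta1 = rng.normal(size=200_000)
eta2 = 2 * rng.normal(size=200_000)                # Var(eta2) = 4
phi, zeta = eps - eta2, eta1 - eta2
print(np.cov(phi, zeta)[0, 1], np.var(eta2))       # both close to 4
```

This shared-error term is exactly why naive regression of $z$ on $w$ (ignoring the induced correlation) would be biased, motivating the errors-in-variables treatment.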
39,060
Linear regression with constrained coefficient
This is an old question, but it may help you. You can use the ConsReg package. See the example below. Imagine you want the following constraints on your parameters: all coefficients will be less than 1 and greater than -1; $x_4 < 0.2$; and the coefficients of $x_3$ and $x_3^2$ must satisfy $(x_3 + x_3^2 > 0.01)$. You can pass these constraints to the function easily:

constraints = '(x3 + `I(x3^2)`) > .01, x4 < .2'
LOWER = -1, UPPER = 1

And finally, set initial parameters that fulfill the constraints above:

ini.pars.coef = c(-.4, .12, -.004, 0.1, 0.1, .15)

Complete example:

require(ConsReg)
data("fake_data")
fit2 = ConsReg(formula = y ~ x1 + x2 + x3 + I(x3^2) + x4,
               data = fake_data,
               family = 'gaussian',
               constraints = '(x3 + `I(x3^2)`) > .01, x4 < .2',
               optimizer = 'mcmc',
               LOWER = -1, UPPER = 1,
               ini.pars.coef = c(-.4, .12, -.004, 0.1, 0.1, .15))
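The same kind of fit can be sketched without ConsReg as a box-bounded, inequality-constrained least-squares problem. A hedged Python/SciPy illustration (the simulated data and coefficient values are my own, not the package's fake_data):

```python
import numpy as np
from scipy.optimize import minimize

# Least squares with all coefficients in [-1, 1], b4 < 0.2, and
# b3 + b3sq > 0.01, solved with SLSQP. Column order: intercept,
# x1, x2, x3, x3^2, x4 (indices 0..5).
rng = np.random.default_rng(3)
x1, x2, x3, x4 = rng.normal(size=(4, 300))
X = np.column_stack([np.ones(300), x1, x2, x3, x3**2, x4])
b_true = np.array([-0.4, 0.12, -0.004, 0.1, 0.1, 0.15])
y = X @ b_true + rng.normal(scale=0.5, size=300)

sse = lambda b: np.sum((y - X @ b) ** 2)
cons = [{'type': 'ineq', 'fun': lambda b: b[3] + b[4] - 0.01},  # b3 + b3sq > .01
        {'type': 'ineq', 'fun': lambda b: 0.2 - b[5]}]          # b4 < .2
res = minimize(sse, x0=b_true, bounds=[(-1, 1)] * 6,
               constraints=cons, method='SLSQP')
print(res.x)
```

With a Gaussian family this is a convex quadratic program, so SLSQP recovers the constrained optimum reliably; ConsReg's 'mcmc' optimizer additionally supports non-Gaussian families.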
39,061
Main idea of Bagging
Bootstrapping is a concept in statistics of approximating the sampling distribution of a statistic by repeatedly sampling from a given sample of size $n$. We construct $B$ samples, each of size $n$, by sampling with replacement from the original sample. The statistic of interest is calculated for each of the $B$ samples. For sufficiently large $B$, we have a good idea of how the statistic is distributed. Roughly speaking, this distribution indicates the range of values of the statistic and how dense these values are.

Bagging, or Bootstrap AGGregatING, is an extension of bootstrapping to classification and regression problems. The main idea is to sample with replacement from the training data so that we now have $B$ training data sets, each having $n' \le n$ observations. The machine-learning algorithm is trained on each of the $B$ data sets to form a committee. When predicting (or classifying) future test observations, we ask each trained algorithm in the committee for its prediction, then compute a (weighted) average of the $B$ predictions to obtain a single prediction. The simplest approach is to weight each of the $B$ committee members equally. However, several variants are available that down-weight less reliable committee members (e.g., those with poor classification accuracy, or those trained on samples containing many outliers).
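The committee idea can be shown in a few lines. A minimal illustrative sketch (Python/NumPy; the 1-nearest-neighbour base learner and the simulated data are my own choices, picked because 1-NN is deliberately high-variance and therefore benefits visibly from bagging):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100
x = np.sort(rng.uniform(0, 1, n))
truth = np.sin(2 * np.pi * x)
y = truth + rng.normal(0, 0.5, n)

def one_nn_predict(xtr, ytr, xnew):
    # predict with the single nearest training point (unstable base learner)
    return ytr[np.argmin(np.abs(xtr[:, None] - xnew[None, :]), axis=0)]

B = 200
preds = np.empty((B, n))
for b in range(B):
    idx = rng.integers(0, n, n)                 # sample n points with replacement
    preds[b] = one_nn_predict(x[idx], y[idx], x)
bagged = preds.mean(axis=0)                     # equal-weight committee average

single = one_nn_predict(x, y, x)                # un-bagged 1-NN memorizes the noise
print(np.mean((single - truth) ** 2), np.mean((bagged - truth) ** 2))
```

Averaging over resamples keeps each member's (small) bias but shrinks the variance, so the bagged committee tracks the underlying curve much more closely than any single member.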
39,062
Model fitting when errors take a Cauchy distribution
The least squares estimates for the regression coefficients are only equal to the maximum-likelihood estimates when the errors have a normal distribution (see here for the proof). If you really wanted maximum likelihood estimates for regression parameters with Cauchy errors, just look at that likelihood: $$L(\beta,\sigma)=\prod_{i=1}^n {\frac{1}{\pi\sigma\left(1+\left(\frac{y_i-\beta^\mathrm{T}x_i}{\sigma}\right)^2\right)}}$$ ($y_i$ is the $i$th observation, $x_i$ the vector of predictors, $\sigma$ the scale parameter, & $\beta$ the vector of coefficients.) There's no sufficient statistic of lower dimensionality than the entire dataset, so it's not so easy to maximize, though there's probably a better method than brute force. But without some theoretical motivation for assuming Cauchy errors, you can just say they have some fat-tailed distribution. In this situation some form or other of robust regression would be worth considering. Note that the least squares approach isn't the worst thing you could use even so. Provided the variance is constant (& finite, which it isn't for the Cauchy) it still gives consistent estimates, even the best linear unbiased estimates, though you'd have to take confidence intervals with a pinch of salt.
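Since no closed form exists, the Cauchy likelihood above has to be maximized numerically. A hedged sketch (Python/SciPy on simulated data; the starting values and optimizer choice are my own, and $\sigma$ is optimized on the log scale to keep it positive):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
n = 400
x = rng.uniform(-3, 3, n)
X = np.column_stack([np.ones(n), x])
beta_true, sigma_true = np.array([2.0, -1.0]), 0.7
# standard Cauchy draws via the quantile function tan(pi*(u - 1/2))
y = X @ beta_true + sigma_true * np.tan(np.pi * (rng.uniform(size=n) - 0.5))

def neg_log_lik(theta):
    beta, log_sigma = theta[:2], theta[2]
    z = (y - X @ beta) / np.exp(log_sigma)
    # negative Cauchy log-likelihood, up to an additive constant
    return np.sum(np.log1p(z ** 2)) + n * log_sigma

start = np.array([np.median(y), 0.0, 0.0])   # crude, robust starting values
fit = minimize(neg_log_lik, start, method='Nelder-Mead',
               options={'maxiter': 5000, 'xatol': 1e-8, 'fatol': 1e-8})
beta_hat, sigma_hat = fit.x[:2], np.exp(fit.x[2])
print(beta_hat, sigma_hat)
```

Note the answer's caveat applies here too: the Cauchy likelihood surface can be multimodal in small samples, so starting values matter; with $n$ this large the central mode dominates.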
39,063
Model fitting when errors take a Cauchy distribution
The Jeffreys (posterior) distribution is quite convenient for doing inference in a linear regression model with location-scale errors. Inference based on the Jeffreys distribution achieves very good frequentist properties: it provides confidence intervals whose coverage is close to the nominal coverage, even for very small sample sizes.

Let $\Phi$ be any differentiable cdf and $\phi=\Phi'$ the corresponding pdf. Consider the linear regression model $Y=X\beta+\sigma\epsilon$ with $\epsilon_i \sim_{\text{iid}} \mathrm{d}\Phi$. The Jeffreys posterior distribution is given, up to a proportionality constant, by: $$ \pi(\beta, \sigma \mid y) \propto \frac{1}{\sigma^{n+1}} \prod_{i=1}^n \phi\left(\frac{y_i-\mu_i}{\sigma}\right) =: f(\beta, \sigma \mid y) $$ where $\mu_i=X_i \beta$ is the expected value of $y_i$. The problem, given a set $A$ in the $q$-dimensional parameter space $\Theta$ (with $q=p+1$, since the parameters are $\beta_1$, $\ldots$, $\beta_p$ and $\sigma$), is to evaluate $$ \int_A \pi(\beta, \sigma \mid y) \mathrm{d}\beta \mathrm{d}\sigma = \frac{\int_A f(\beta, \sigma \mid y)\mathrm{d}\beta \mathrm{d}\sigma}{\int_\Theta f(\beta, \sigma \mid y)\mathrm{d}\beta\mathrm{d}\sigma} \approx ? $$

Change of variables

It is possible to transform this integral to an integral on ${[0,1]}^q$ as follows. The key point is the fact that $$ \frac{1}{\sigma^{q+1}}\prod_{i=1}^q\phi\left(\frac{y_i-\mu_i}{\sigma}\right) $$ is, up to a proportionality constant not depending on $(\beta,\sigma)$, the Jacobian of the function $$ F\colon(\beta,\sigma)\mapsto \left(\Phi\left(\frac{y_1-\mu_1}{\sigma}\right), \ldots, \Phi\left(\frac{y_q-\mu_q}{\sigma}\right)\right) \in {[0,1]}^q. $$ I have not tried to prove this point, but one can numerically check it, for a simple linear regression for example:

x <- 1:10; y <- rcauchy(length(x), 2+x)
X <- model.matrix(~x)
Phi <- pcauchy; phi <- dcauchy # use any cdf and pdf you want
F <- function(betasigma){
  sapply(1:3, function(i) Phi((y[i]-X[i,]%*%betasigma[1:2])/betasigma[3]))
}
library(pracma) # provides the jacobian() function
f <- function(betasigma){
  prod(sapply(1:3, function(i){
    phi((y[i]-X[i,]%*%betasigma[1:2])/betasigma[3])
  })) / betasigma[3]^4
}
# look, the ratio is always the same:
det(jacobian(F, c(1,1,1)))/f(c(1,1,1))
## [1] -19.01263
det(jacobian(F, c(1,2,1)))/f(c(1,2,1))
## [1] -19.01263
det(jacobian(F, c(2,2,2)))/f(c(2,2,2))
## [1] -19.01263

Thus, $$ \begin{align} \int_A f(\beta, \sigma \mid y)d\beta d\sigma & \propto \int_A \bigl|\det J_F(\mu,\sigma)\bigr| \frac{1}{\sigma^{n-q}} \prod_{i=q+1}^n \phi\left(\frac{y_i-\mu_i}{\sigma}\right) \mathrm{d}\beta\mathrm{d}\sigma \\ & = \int_{F(A)} g\bigl(F^{-1}(u_1, \ldots, u_q)\bigr)\mathrm{d}u_1\ldots\mathrm{d}u_q \end{align} $$ where $g(\beta,\sigma)=\frac{1}{\sigma^{q+1}} \prod_{i=1}^q \phi\left(\frac{y_i-\mu_i}{\sigma}\right)$.

It is not difficult to get the inverse of $F$: $$ F^{-1}(u_1, \ldots, u_q) = {(\beta,\sigma)}' = {(H'H)}^{-1}H'y_{1:q} $$ where the matrix $H$ is $H=\left[ X_{1:q}, {\bigl(\Phi^{-1}(u_i)\bigr)}_{i\in(1:q)}\right]$. Note that $F^{-1}(u_1, \ldots, u_q)$ yields $\sigma<0$ for some values of the $u_i$. In fact, if $F^{-1}\bigl(\Phi(z_1), \ldots, \Phi(z_q)\bigr)={(\beta,\sigma)}'$, then $F^{-1}\bigl(\Phi(-z_1), \ldots, \Phi(-z_q)\bigr)={(\beta,-\sigma)}'$, therefore the set of $u_i$'s for which $\sigma>0$ has Lebesgue measure $1/2$. In fact, the Jeffreys distribution for a location-scale linear regression is the same as the fiducial distribution. The method I present is a particular case of the general method given in the paper Computational issues of generalized fiducial inference by Hannig et al.
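The $F^{-1}$ formula can also be verified numerically. A sketch (Python/NumPy, using the Cauchy cdf so that $\Phi^{-1}(u)=\tan(\pi(u-1/2))$ is available in closed form): with $q$ equations in $q$ unknowns, $(\beta,\sigma)'={(H'H)}^{-1}H'y_{1:q}$ solves $y_{1:q}=X_{1:q}\beta+\sigma\,\Phi^{-1}(u)$ exactly, and flipping $u\mapsto 1-u$ flips the sign of $\sigma$ while leaving $\beta$ unchanged:

```python
import numpy as np

rng = np.random.default_rng(6)
x = np.arange(1.0, 11.0)
X = np.column_stack([np.ones(10), x])
y = X @ np.array([2.0, 1.0]) + np.tan(np.pi * (rng.uniform(size=10) - 0.5))

q = 3                              # p = 2 regression coefficients, plus sigma
u = np.array([0.2, 0.6, 0.9])
z = np.tan(np.pi * (u - 0.5))      # Phi^{-1}(u) for the standard Cauchy
H = np.column_stack([X[:q], z])    # H = [X_{1:q}, Phi^{-1}(u_i)]
theta = np.linalg.solve(H.T @ H, H.T @ y[:q])
beta, sigma = theta[:2], theta[2]
print(np.max(np.abs(y[:q] - X[:q] @ beta - sigma * z)))  # ~0: exact inverse

# sign-flip property: u -> 1-u negates sigma, beta is unchanged
H2 = np.column_stack([X[:q], np.tan(np.pi * ((1 - u) - 0.5))])
theta2 = np.linalg.solve(H2.T @ H2, H2.T @ y[:q])
print(theta2)
```

This is why the algorithm below simply discards the half of the hypercube centers that yield $\sigma<0$.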
But there is a substantial simplification of the general method in the case of a location-scale linear regression (we can take $K=1$ in the notation of the paper, but I will not develop this point).

Algorithm

The Jeffreys function below returns an approximation of the Jeffreys distribution for the linear regression model when errors follow a Student distribution with degrees of freedom df, to be set by the user. For df=Inf (the default), this is the Gaussian linear regression; for df=1 this is the Cauchy linear regression. In the Gaussian case df=Inf, we can compare the results to the exact Jeffreys distribution, which is known and elementary. Moreover, inference based on the Jeffreys distribution in the Gaussian case is the same as the usual least-squares inference (as we will see in the examples). By default, the X matrix is the matrix of the intercept-only model y~1. The approximation is obtained by a Riemann-like integration on ${[0,1]}^q$ using a uniform partition into hypercubes. The partition is controlled by the argument L, giving the number of centers of the hypercubes on each coordinate (hence there are $L^q$ hypercubes).

#' parameters: y (sample), X (model matrix), L (number of points per coordinate)
Jeffreys <- function(y, X=as.matrix(rep(1,length(y))), L=10, df=Inf){
  qdistr <- function(x, ...) qt(x, df=df, ...)
  ddistr <- function(x, ...) dt(x, df=df, ...)
  n <- nrow(X)
  q <- ncol(X)+1
  # centers of hypercubes (volume 1/L^q)
  centers <- as.matrix(do.call(expand.grid,
                rep(list(seq(0, 1, length.out=L+1)[-1] - 1/(2*L)), q)))
  # remove centers having equal coordinates (H'H is not invertible)
  centers <- centers[apply(centers, 1, function(row) length(unique(row))>1),]
  # outputs
  M <- (L^q-L)/2 # number of centers yielding sigma>0
  J <- numeric(M)
  Theta <- array(0, c(M, q))
  # algorithm
  I <- 1:q
  yI <- y[I]; ymI <- y[-I]
  XI <- X[I,]; XmI <- X[-I,]
  counter <- 0
  for(m in 1:nrow(centers)){
    H <- unname(cbind(XI, qdistr(centers[m,])))
    theta <- solve(crossprod(H))%*%t(H)%*%yI
    if(theta[q]>0){ # sigma>0
      counter <- counter+1
      J[counter] <- sum(ddistr((ymI-XmI%*%head(theta,-1))/theta[q], log=TRUE)) -
                      (n-q)*log(theta[q])
      Theta[counter,] <- theta
    }
  }
  J <- exp(J)
  return(list(Beta=Theta[,-q], sigma=Theta[,q], W=J/sum(J)))
}

The function returns the values of $(\beta,\sigma)$ corresponding to every hypercube center in the partition of ${[0,1]}^q$. It also computes the values of the integrand evaluated at every center in the vector J, and returns the normalized vector of weights W=J/sum(J). We will see how to deal with these outputs on some examples.

First example: Gaussian sample

Let's try it for an i.i.d. Gaussian sample $y_i \sim_{\text{i.i.d.}} {\cal N}(\mu, \sigma^2)$:

set.seed(666)
n <- 4
y <- rnorm(n)
results <- Jeffreys(y, L=100)
Mu <- results$Beta; Sigma <- results$sigma; W <- results$W

Now we can treat Mu and Sigma as if they were weighted samples of the Jeffreys distribution, with weights W. The theoretical mean is the sample mean, and our approximation is quite good:

sum(W*Mu); mean(y)
## [1] 1.109794
## [1] 1.110175

We can get the approximate Jeffreys cdf with the ewcdf function (weighted empirical cdf) of the spatstat package, and compare it with the theoretical one.
The two curves match almost perfectly:

### approximate Jeffreys distribution of µ ###
F_mu <- spatstat::ewcdf(Mu, weights=W)
curve(F_mu, from=0, to=2.5, xlab="mu", ylim=c(0,1), col="blue", lwd=2)
### exact Jeffreys distribution ###
mean_y <- mean(y); sd_y <- sd(y)
curve(pt((x-mean_y)/(sd_y/sqrt(n)), df=n-1), add=TRUE, col="red", lwd=4, lty="dashed")

We can get confidence intervals by applying the quantile function to the weighted cdf F_mu. They are theoretically the same as the usual confidence intervals in Gaussian linear regression, and we indeed get very close results:

quantile(F_mu, c(2.5,97.5)/100)
##       2.5%      97.5% 
## -0.7143989  2.9309891 
confint(lm(y~1)) # theoretically the same
##                  2.5 %  97.5 %
## (Intercept) -0.7121603 2.93251

The same for $\sigma$ (knowing the inverse-Gamma distribution of $\sigma^2$):

F_sigma <- spatstat::ewcdf(Sigma,W)
curve(F_sigma, from=0, to=2.5, xlab="sigma", ylim=c(0,1), col="blue", lwd=2)
curve(1-pgamma(1/x^2, (n - 1)/2, (n - 1) * sd_y^2/2), add=TRUE, col="red", lwd=4, lty="dashed")

Second example: Cauchy sample

Now let's try an i.i.d. Cauchy sample with sample size $n=200$.

set.seed(666)
n <- 200
y <- rcauchy(n)
results <- Jeffreys(y, L=100, df=1)
Mu <- results$Beta; Sigma <- results$sigma; W <- results$W

Since $n=200$ is not a small sample size, the Jeffreys means are close to the maximum-likelihood estimates:

sum(W*Mu); sum(W*Sigma)
## [1] -0.01490355
## [1] 0.9081371
MASS::fitdistr(y, "cauchy")
##      location        scale   
##   -0.01345121    0.89958785 
##  ( 0.09185580) ( 0.08874509)

The MASS::rlm estimates are not so close:

rlmfit <- MASS::rlm(y~1)
rlmfit$coefficients
## (Intercept) 
##  -0.1160915 
rlmfit$s # rlm estimate of sigma
## [1] 1.338744

The Jeffreys confidence intervals are close to the ML asymptotic confidence intervals:

F_mu <- spatstat::ewcdf(Mu, weights=W); F_sigma <- spatstat::ewcdf(Sigma,W)
quantile(F_mu, c(2.5,97.5)/100)
##       2.5%      97.5% 
## -0.1971883  0.1707172 
quantile(F_sigma, c(2.5,50,97.5)/100)
##      2.5%       50%     97.5% 
## 0.7471966 0.9055118 1.1027395 
confint(MASS::fitdistr(y, "cauchy"))
##               2.5 %    97.5 %
## location -0.1934853 0.1665829
## scale     0.7256507 1.0735250

Third example: Gaussian simple linear regression

Nice:

f <- function(x) 4+2*x
set.seed(666)
n <- 20
x <- seq_len(n) # covariates
y <- f(x)+rnorm(n)
# run algorithm
results <- Jeffreys(y, X=model.matrix(~x), L=60)
# outputs
W <- results$W; Beta0 <- results$Beta[,1]; Beta1 <- results$Beta[,2]
sum(W*Beta0); sum(W*Beta1) # Jeffreys means
## [1] 4.172721
## [1] 1.983503
coef(lm(y~x)) # theoretically the same
## (Intercept)           x 
##    4.179859    1.983008 
F_Beta0 <- spatstat::ewcdf(Beta0, weights=W); F_Beta1 <- spatstat::ewcdf(Beta1, weights=W)
quantile(F_Beta0, c(2.5,97.5)/100); quantile(F_Beta1, c(2.5,97.5)/100)
##     2.5%    97.5% 
## 2.883869 5.499620 
##     2.5%    97.5% 
## 1.872328 2.095764 
confint(lm(y~x)) # theoretically the same
##                2.5 %   97.5 %
## (Intercept) 2.857903 5.501815
## x           1.872653 2.093362

Fourth example: Cauchy simple linear regression

set.seed(666)
y <- f(x)+rcauchy(n)
# run algorithm
results <- Jeffreys(y, X=model.matrix(~x), L=60, df=1)
# outputs
W <- results$W
Beta0 <- results$Beta[,1]; Beta1 <- results$Beta[,2]; Sigma <- results$sigma
# Jeffreys means
sum(W*Beta0); sum(W*Beta1); sum(W*Sigma)
## [1] 4.157664
## [1] 1.997121
## [1] 0.685825

While $n=20$ is not large, the ML estimates of the regression parameters are close to their Jeffreys means, but they are not so close for $\sigma$:

X <- model.matrix(~x)
likelihood <- function(y, beta0, beta1, sigma){
  prod(dcauchy((y-X%*%c(beta0,beta1))/sigma)/sigma)
}
(ML <- MASS::fitdistr(y, likelihood,
        list(beta0=sum(W*Beta0), beta1=sum(W*Beta1), sigma=1)))
##       beta0        beta1        sigma   
##  4.20188590   1.99239112   0.60087397 
## (0.54295228) (0.04433536) (0.18660186)

The Jeffreys confidence intervals are close to the ML confidence intervals, except for $\sigma$:

confint(ML)
##           2.5 %    97.5 %
## beta0  3.137719 5.2660528
## beta1  1.905495 2.0792868
## sigma  0.235141 0.9666069
F_Beta0 <- spatstat::ewcdf(Beta0, weights=W); F_Beta1 <- spatstat::ewcdf(Beta1, weights=W)
quantile(F_Beta0, c(2.5,97.5)/100); quantile(F_Beta1, c(2.5,97.5)/100)
##     2.5%    97.5% 
## 3.098442 5.167351 
##     2.5%    97.5% 
## 1.913328 2.089146 
F_sigma <- spatstat::ewcdf(Sigma,W); quantile(F_sigma, c(2.5,50,97.5)/100)
##      2.5%       50%     97.5% 
## 0.3418978 0.6491162 1.2464163 

The MASS::rlm estimates of the regression parameters are rather close to their Jeffreys means too:

rlmfit <- MASS::rlm(y~x)
rlmfit$coefficients
## (Intercept)           x 
##    3.945603    2.042590 
rlmfit$s # rlm estimate of sigma
## [1] 1.019974
Model fitting when errors take a Cauchy distribution
The Jeffreys (posterior) distribution is quite nice to do inference in a linear regression model with location-scale errors. Inference based on the Jeffreys distribution achieves very good frequentist
Model fitting when errors take a Cauchy distribution The Jeffreys (posterior) distribution is quite nice to do inference in a linear regression model with location-scale errors. Inference based on the Jeffreys distribution achieves very good frequentist properties: it provides confidence intervals whose coverage is close to the nominal coverage, even for very small sample sizes. Let $\Phi$ be any derivable cdf and $\phi=\Phi'$ the corresponding pdf. Consider the linear regression model $Y=X\beta+\sigma\epsilon$ with $\epsilon_i \sim_{\text{iid}} \mathrm{d}\Phi$. The Jeffreys posterior distribution is given, up to a proportionality constant, by: $$ \pi(\beta, \sigma \mid y) \propto \frac{1}{\sigma^{n+1}} \prod_{i=1}^n \phi\left(\frac{y_i-\mu_i}{\sigma}\right) =: f(\beta, \sigma \mid y) $$ where $\mu_i=X_i \beta$ is the expected value of $y_i$. The problem, given a set $A$ in the $q$-dimensional parameter space $\Theta$ (with $q=p+1$ since parameters are $\beta_1$, $\ldots$, $\beta_p$ and $\sigma$), is to evaluate $$ \int_A \pi(\beta, \sigma \mid y) \mathrm{d}\beta \mathrm{d}\sigma = \frac{\int_A f(\beta, \sigma \mid y)\mathrm{d}\beta \mathrm{d}\sigma}{\int_\Theta f(\beta, \sigma \mid y)\mathrm{d}\beta\mathrm{d}\sigma} \approx ? $$ Change of variables It is possible to transform this integral to an integral on ${[0,1]}^q$ as follows. The key point is the fact that $$ \frac{1}{\sigma^{q+1}}\prod_{i=1}^q\phi\left(\frac{y_i-\mu_i}{\sigma}\right) $$ is, up to a proportionality constant not depending on $(\beta,\sigma)$, the Jacobian of the function $$ F\colon(\beta,\sigma)\mapsto \left(\Phi\left(\frac{y_1-\mu_1}{\sigma}\right), \ldots, \Phi\left(\frac{y_q-\mu_q}{\sigma}\right)\right) \in {[0,1]}^q. 
$$ I have not tried to prove this point, but one can numerically check it, for a simple linear regression for example: x <- 1:10; y <- rcauchy(length(x), 2+x) X <- model.matrix(~x) Phi <- pcauchy; phi <- dcauchy # use any cdf and pdf you want F <- function(betasigma){ sapply(1:3, function(i) Phi((y[i]-X[i,]%*%betasigma[1:2])/betasigma[3])) } library(pracma) # provides the jacobian() function f <- function(betasigma){ prod(sapply(1:3, function(i){ phi((y[i]-X[i,]%*%betasigma[1:2])/betasigma[3]) })) / betasigma[3]^4 } # look, the ratio is always the same: det(jacobian(F, c(1,1,1)))/f(c(1,1,1)) ## [1] -19.01263 det(jacobian(F, c(1,2,1)))/f(c(1,2,1)) ## [1] -19.01263 det(jacobian(F, c(2,2,2)))/f(c(2,2,2)) ## [1] -19.01263 Thus, $$ \begin{align} \int_A f(\beta, \sigma \mid y)d\beta d\sigma & \propto \int_A \bigl|\det J_F(\mu,\sigma)\bigr| \frac{1}{\sigma^{n-q}} \prod_{i=q+1}^n \phi\left(\frac{y_i-\mu_i}{\sigma}\right) \mathrm{d}\beta\mathrm{d}\sigma \\ & = \int_{F(A)} g\bigl(F^{-1}(u_1, \ldots, u_q)\bigr)\mathrm{d}u_1\ldots\mathrm{d}u_q \end{align} $$ where $g(\beta,\sigma)=\frac{1}{\sigma^{n-q}} \prod_{i=q+1}^n \phi\left(\frac{y_i-\mu_i}{\sigma}\right)$ is the factor left over after extracting the Jacobian. It is not difficult to get the inverse of $F$: $$ F^{-1}(u_1, \ldots, u_q) = {(\beta,\sigma)}' = {(H'H)}^{-1}H'y_{1:q} $$ where the matrix $H$ is $H=\left[ X_{1:q}, {\bigl(\Phi^{-1}(u_i)\bigr)}_{i\in(1:q)}\right]$. Note that $F^{-1}(u_1, \ldots, u_q)$ yields $\sigma<0$ for some values of the $u_i$. In fact, if $F^{-1}\bigl(\Phi(z_1), \ldots, \Phi(z_q))={(\beta,\sigma)}'$, then $F^{-1}\bigl(\Phi(-z_1), \ldots, \Phi(-z_q))={(\beta,-\sigma)}'$, therefore the set of $u_i$'s for which $\sigma>0$ has Lebesgue measure $1/2$. In fact, the Jeffreys distribution for a location-scale linear regression is the same as the fiducial distribution. The method I present is a particular case of the general method given in the paper Computational issues of generalized fiducial inference by Hannig et al. 
But the general method simplifies considerably in the case of a location-scale linear regression (we can take $K=1$ with the notations of the paper, but I will not develop this point). Algorithm The Jeffreys function below returns an approximation of the Jeffreys distribution for the linear regression model when errors follow a Student distribution with degrees of freedom df, to be set by the user. For df=Inf (default), this is the Gaussian linear regression; for df=1 this is the Cauchy linear regression. In the Gaussian case df=Inf, we can compare the results to the exact Jeffreys distribution, which is known and elementary. Moreover the inference based on the Jeffreys distribution in the Gaussian case is the same as the usual least-squares inference (as we will see on examples). By default, the X matrix is the matrix of the intercept-only model y~1. The approximation is obtained by a Riemann-like integration on ${[0,1]}^q$ using a uniform partition into hypercubes. The partition is controlled by the argument L, giving the number of centers of the hypercubes on each coordinate (hence there are $L^q$ hypercubes). #' parameters: y (sample), X (model matrix), L (number of points per coordinate), df (degrees of freedom of the Student errors) Jeffreys <- function(y, X=as.matrix(rep(1,length(y))), L=10, df=Inf){ qdistr <- function(x, ...) qt(x, df=df, ...) ddistr <- function(x, ...) dt(x, df=df, ...) 
n <- nrow(X) q <- ncol(X)+1 # centers of hypercubes (volume 1/L^q) centers <- as.matrix(do.call(expand.grid, rep(list(seq(0, 1, length.out=L+1)[-1] - 1/(2*L)), q))) # remove centers having equal coordinates (H'H is not invertible) centers <- centers[apply(centers, 1, function(row) length(unique(row))>1),] # outputs M <- (L^q-L)/2 # number of centers yielding sigma>0 J <- numeric(M) Theta <- array(0, c(M, q)) # algorithm I <- 1:q yI <- y[I]; ymI <- y[-I] XI <- X[I,]; XmI <- X[-I,] counter <- 0 for(m in 1:nrow(centers)){ H <- unname(cbind(XI, qdistr(centers[m,]))) theta <- solve(crossprod(H))%*%t(H)%*%yI if(theta[q]>0){ # sigma>0 counter <- counter+1 J[counter] <- sum(ddistr((ymI-XmI%*%head(theta,-1))/theta[q], log=TRUE)) - (n-q)*log(theta[q]) Theta[counter,] <- theta } } J <- exp(J) return(list(Beta=Theta[,-q], sigma=Theta[,q], W=J/sum(J))) } The function returns the values of $(\beta,\sigma)$ corresponding to every hypercube center in the partition of ${[0,1]}^q$. It also computes the values of the integrand evaluated at every center in the vector J, and returns the normalized vector of weights W=J/sum(J). We will see how to deal with these outputs on some examples. First example: Gaussian sample Let's try it for an i.i.d. Gaussian sample $y_i \sim_{\text{i.i.d.}} {\cal N}(\mu, \sigma^2)$: set.seed(666) n <- 4 y <- rnorm(n) results <- Jeffreys(y, L=100) Mu <- results$Beta; Sigma <- results$sigma; W <- results$W Now we can treat Mu and Sigma as if they were weighted samples of the Jeffreys distribution, with weights W. The theoretical mean is the sample mean, and our approximation is quite good: sum(W*Mu); mean(y) ## [1] 1.109794 ## [1] 1.110175 We can get the approximate Jeffreys cdf with the ewcdf function (weighted empirical cdf) of the spatstat package, and compare with the theoretical one. 
Our approximation is essentially exact: ### approximate Jeffreys distribution of µ ### F_mu <- spatstat::ewcdf(Mu, weights=W) curve(F_mu, from=0, to=2.5, xlab="mu", ylim=c(0,1), col="blue", lwd=2) ### exact Jeffreys distribution ### mean_y <- mean(y); sd_y <- sd(y) curve(pt((x-mean_y)/(sd_y/sqrt(n)), df=n-1), add=TRUE, col="red", lwd=4, lty="dashed") We can get confidence intervals by applying the quantile function to the weighted cdf F_mu. They are theoretically the same as the usual confidence intervals in Gaussian linear regression, and we indeed get very close results: quantile(F_mu, c(2.5,97.5)/100) ## 2.5% 97.5% ## -0.7143989 2.9309891 confint(lm(y~1)) # theoretically the same ## 2.5 % 97.5 % ## (Intercept) -0.7121603 2.93251 The same for $\sigma$ (knowing the inverse-Gamma distribution of $\sigma^2$): F_sigma <- spatstat::ewcdf(Sigma,W) curve(F_sigma, from=0, to=2.5, xlab="sigma", ylim=c(0,1), col="blue", lwd=2) curve(1-pgamma(1/x^2, (n - 1)/2, (n - 1) * sd_y^2/2), add=TRUE, col="red", lwd=4, lty="dashed") Second example: Cauchy sample Now let's try an i.i.d. Cauchy sample with sample size $n=200$. 
set.seed(666) n <- 200 y <- rcauchy(n) results <- Jeffreys(y, L=100, df=1) Mu <- results$Beta; Sigma <- results$sigma; W <- results$W Since $n=200$ is not a small sample size, the Jeffreys means are close to the maximum-likelihood estimates: sum(W*Mu); sum(W*Sigma) ## [1] -0.01490355 ## [1] 0.9081371 MASS::fitdistr(y, "cauchy") ## location scale ## -0.01345121 0.89958785 ## ( 0.09185580) ( 0.08874509) The MASS::rlm estimates are not so close: rlmfit <- MASS::rlm(y~1) rlmfit$coefficients ## (Intercept) ## -0.1160915 rlmfit$s # rlm estimate of sigma ## [1] 1.338744 Jeffreys confidence intervals are close to the ML asymptotic confidence intervals: F_mu <- spatstat::ewcdf(Mu, weights=W); F_sigma <- spatstat::ewcdf(Sigma,W) quantile(F_mu, c(2.5,97.5)/100) ## 2.5% 97.5% ## -0.1971883 0.1707172 quantile(F_sigma, c(2.5,50,97.5)/100) ## 2.5% 50% 97.5% ## 0.7471966 0.9055118 1.1027395 confint(MASS::fitdistr(y, "cauchy")) ## 2.5 % 97.5 % ## location -0.1934853 0.1665829 ## scale 0.7256507 1.0735250 Third example : Gaussian simple linear regression Nice: f <- function(x) 4+2*x set.seed(666) n <- 20 x <- seq_len(n) # covariates y <- f(x)+rnorm(n) # run algorithm results <- Jeffreys(y, X=model.matrix(~x), L=60) # outputs W <- results$W; Beta0 <- results$Beta[,1]; Beta1 <- results$Beta[,2] sum(W*Beta0); sum(W*Beta1) # Jeffreys means ## [1] 4.172721 ## [1] 1.983503 coef(lm(y~x)) # theoretically the same ## (Intercept) x ## 4.179859 1.983008 F_Beta0 <- spatstat::ewcdf(Beta0, weights=W); F_Beta1 <- spatstat::ewcdf(Beta1, weights=W) quantile(F_Beta0, c(2.5,97.5)/100); quantile(F_Beta1, c(2.5,97.5)/100) ## 2.5% 97.5% ## 2.883869 5.499620 ## 2.5% 97.5% ## 1.872328 2.095764 confint(lm(y~x)) # theoretically the same ## 2.5 % 97.5 % ## (Intercept) 2.857903 5.501815 ## x 1.872653 2.093362 Fourth example : Cauchy simple linear regression set.seed(666) y <- f(x)+rcauchy(n) # run algorithm results <- Jeffreys(y, X=model.matrix(~x), L=60, df=1) # outputs W <- results$W; Beta0 <- 
results$Beta[,1]; Beta1 <- results$Beta[,2]; Sigma <- results$sigma # Jeffreys means sum(W*Beta0); sum(W*Beta1); sum(W*Sigma) ## [1] 4.157664 ## [1] 1.997121 ## [1] 0.685825 Although $n=20$ is not large, the ML estimates of the regression parameters are close to their Jeffreys means, but they are not so close for $\sigma$: X <- model.matrix(~x) likelihood <- function(y, beta0, beta1, sigma){ prod(dcauchy((y-X%*%c(beta0,beta1))/sigma)/sigma) } (ML <- MASS::fitdistr(y, likelihood, list(beta0=sum(W*Beta0), beta1=sum(W*Beta1), sigma=1))) ## beta0 beta1 sigma ## 4.20188590 1.99239112 0.60087397 ## (0.54295228) (0.04433536) (0.18660186) The Jeffreys confidence intervals are close to the ML confidence intervals, except for $\sigma$: confint(ML) ## 2.5 % 97.5 % ## beta0 3.137719 5.2660528 ## beta1 1.905495 2.0792868 ## sigma 0.235141 0.9666069 F_Beta0 <- spatstat::ewcdf(Beta0, weights=W); F_Beta1 <- spatstat::ewcdf(Beta1, weights=W) quantile(F_Beta0, c(2.5,97.5)/100); quantile(F_Beta1, c(2.5,97.5)/100) ## 2.5% 97.5% ## 3.098442 5.167351 ## 2.5% 97.5% ## 1.913328 2.089146 F_sigma <- spatstat::ewcdf(Sigma,W); quantile(F_sigma, c(2.5,50,97.5)/100) ## 2.5% 50% 97.5% ## 0.3418978 0.6491162 1.2464163 The MASS::rlm estimates of the regression parameters are rather close to their Jeffreys means too: rlmfit <- MASS::rlm(y~x) rlmfit$coefficients ## (Intercept) x ## 3.945603 2.042590 rlmfit$s # rlm estimate of sigma ## [1] 1.019974
39,064
Model fitting when errors take a Cauchy distribution
GraphPad Prism can do nonlinear regression assuming a Cauchy distribution. That is our robust method. The mathematical details are explained in detail on pages 11-14 of BMC Bioinformatics 2006, 7:123 doi:10.1186/1471-2105-7-123, Detecting outliers when fitting data with nonlinear regression – a new method based on robust nonlinear regression and the false discovery rate
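The referenced method is specific to Prism, but the core idea — maximum likelihood under Cauchy (Lorentzian) errors — can be sketched in a few lines of R. This is only an illustration with made-up data and a made-up exponential-decay model, not Prism's actual implementation (which adds the FDR-based outlier detection described in the paper):

```r
# Nonlinear regression with Cauchy errors: minimize the negative
# log-likelihood -sum(dcauchy(residual, scale = s, log = TRUE)) over
# the model parameters and the scale s (kept positive via log(s)).
set.seed(1)
x <- seq(0, 10, length.out = 50)
y <- 5 * exp(-0.4 * x) + rcauchy(50, scale = 0.2)  # toy data
negloglik <- function(par){        # par = c(A, k, log(s))
  A <- par[1]; k <- par[2]; s <- exp(par[3])
  r <- y - A * exp(-k * x)         # residuals of the nonlinear model
  -sum(dcauchy(r, scale = s, log = TRUE))
}
fit <- optim(c(1, 0.1, 0), negloglik)
c(A = fit$par[1], k = fit$par[2], scale = exp(fit$par[3]))
```

Because the Cauchy log-density grows only logarithmically in the residual, a few wild points barely move the fit — which is exactly why this loss is used as a robust method.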
39,065
Model fitting when errors take a Cauchy distribution
A bit too late, but it may be useful for others in the future. Scortchi wrote the likelihood as if the errors were independent, and there is no simple answer in that case. To complement what he said in the end about OLS, note that if you instead specify the joint likelihood with dependent errors, you will end up with a Multivariate Cauchy Distribution (a Multivariate t Distribution with $\nu = 1$), and from here you can recover the maximum likelihood estimates of the regression coefficients, which will be exactly what you would get with OLS. $$\mathcal{L} = \frac{\Gamma((N+1)/2)}{\pi^{(N+1)/2}\sigma^{N/2}}\bigg(1+\frac{\mathbf{\epsilon}^\intercal\mathbf{\epsilon}}{\sigma^{2}}\bigg)^{-(N+1)/2}$$ $$\mathcal{L} = \frac{\Gamma((N+1)/2)}{\pi^{(N+1)/2}\sigma^{N/2}}\bigg[1+\frac{\sum_{i=1}^{N}(y_{i}-\beta x_{i})^{2}}{\sigma^{2}}\bigg]^{-(N+1)/2}$$ If $\ell = \log\mathcal{L}$, then $$\ell = \log{\bigg[\frac{\Gamma((N+1)/2)}{\pi^{(N+1)/2}}\bigg]} -\frac{N}{2}\log{|\sigma|} - \frac{N+1}{2}\log\bigg[1+\frac{\sum_{i=1}^{N}(y_{i}-\beta x_{i})^{2}}{\sigma^{2}}\bigg]$$ $$\frac{\partial\ell}{\partial\beta} = -\frac{N+1}{2}\frac{1}{1+\frac{\sum_{i=1}^{N}(y_{i}-\beta x_{i})^{2}}{\sigma^{2}}}\cdot\frac{2}{\sigma^{2}}\sum_{i=1}^{N}(y_{i}-\beta x_{i})(-x_{i}) = 0$$ $$\frac{\sum_{i=1}^{N}(y_{i}-\beta x_{i})(-x_{i})}{\sigma^{2}+\sum_{i=1}^{N}(y_{i}-\beta x_{i})^{2}} = 0$$ $$\sum_{i=1}^{N}(y_{i}-\beta x_{i})(x_{i}) = 0$$ $$\sum_{i=1}^{N}y_{i}x_{i}-\sum_{i=1}^{N}\beta x_{i}^{2} = 0$$ $$\beta \sum_{i=1}^{N}x_{i}^{2} = \sum_{i=1}^{N}y_{i}x_{i}$$ $$\beta = \frac{\sum_{i=1}^{N}y_{i}x_{i}}{\sum_{i=1}^{N}x_{i}^{2}}$$ And for $\sigma$ $$\sigma = \sqrt{\mathbf{\epsilon}^\intercal\mathbf{\epsilon}\bigg(\frac{N+2}{N}\bigg)}$$ The estimator for $\beta$ is the OLS estimate and the estimator for $\sigma$ is really similar to the OLS $s^2$. Unfortunately these estimators don't have any moments due to the Cauchy density. A bit more complicated algebra using a Multivariate t density instead of a Cauchy density would yield the same estimator for $\beta$, a similar one for $\sigma$, and also the third parameter $\nu$, but with finite moments if $\nu > 2$ and 'inflated' variance compared to standard OLS. You will note that the same inference holds true as in the case of normality: you can use t and F statistics. However, as you can see, there is not much gain compared to standard OLS and maximum likelihood with a normal density. OLS does not require normality or independence.
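A quick numerical sanity check (my own toy example, not from the answer): maximizing this joint log-likelihood over $(\beta, \sigma)$ does reproduce the through-origin OLS estimate $\sum y_i x_i / \sum x_i^2$:

```r
# Maximize l = const - (N/2)*log(sigma) - ((N+1)/2)*log(1 + S/sigma^2),
# where S = sum((y - beta*x)^2), and compare beta-hat with the
# closed-form OLS estimator derived above.
set.seed(1)
x <- 1:20
y <- 2 * x + rnorm(20)             # toy data, slope 2
negloglik <- function(par){        # par = c(beta, log(sigma))
  beta <- par[1]; sigma <- exp(par[2])
  S <- sum((y - beta * x)^2); N <- length(y)
  (N/2) * log(sigma) + ((N + 1)/2) * log(1 + S/sigma^2)
}
fit <- optim(c(1, 0), negloglik)
beta_ols <- sum(y * x) / sum(x^2)  # the closed-form estimator
c(numerical = fit$par[1], closed_form = beta_ols)
```

The two values agree to the precision of optim(), as the derivation predicts: for any fixed $\sigma$, the likelihood is maximized by the $\beta$ that minimizes the residual sum of squares.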
39,066
Model fitting when errors take a Cauchy distribution
The use of Cauchy errors IS NOT a robust method. It leads to a model that can capture outliers, but if there are no outliers, then the resulting model becomes very restrictive since it is being assumed that the distribution of the errors is heavy tailed with a specific tail behaviour. Because the Cauchy distribution is a special case of the t-distribution for $\nu=1$, this makes a very strong statement about how the errors are distributed. A more robust approach consists of using a Student $t$-distribution $t(0,\sigma,\nu)$, where $\nu$ are the degrees of freedom and $\sigma$ is the scale parameter, which are unknown and are to be estimated from the data. The model is $$y_j = h(x_j^{\top}\beta) + e_j,$$ where $e_j\stackrel{ind}{\sim} t(0,\sigma,\nu)$, $j=1,\dots,n$, and $h$ is a real function. The likelihood of $(\beta,\sigma,\nu)$ is then given by $${\mathcal L}(\beta,\sigma,\nu) \propto \prod_{j=1}^n f(y_j- h(x_j^{\top}\beta);0,\sigma,\nu),$$ where $$f(z;\mu,\sigma,\nu) = \dfrac{1}{\sigma}\dfrac{\Gamma\left(\dfrac{\nu+1}{2}\right)}{\sqrt{\pi\nu}\Gamma\left(\dfrac{\nu}{2}\right)} \left[1+\dfrac{1}{\nu}\left(\dfrac{z-\mu}{\sigma}\right)^2\right]^{-\frac{\nu+1}{2}}.$$ The maximum likelihood estimators can be obtained using numerical methods. Note that this structure covers data fitting, linear and nonlinear regression. From Wikipedia: Robust statistics are statistics with good performance for data drawn from a wide range of probability distributions, especially for distributions that are not normally distributed. Robust statistical methods have been developed for many common problems, such as estimating location, scale and regression parameters. One motivation is to produce statistical methods that are not unduly affected by outliers. Another motivation is to provide methods with good performance when there are small departures from parametric distributions. 
For example, robust methods work well for mixtures of two normal distributions with different standard-deviations, for example, one and three; under this model, non-robust methods like a t-test work badly.
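As a concrete illustration of the approach above (my own toy data and code, not from the answer), the linear case $h(u)=u$ can be fit by maximizing the $t$ likelihood over $\beta$, $\sigma$ and $\nu$, with the positive parameters handled on the log scale:

```r
# ML fit of y = b0 + b1*x + e, e ~ t(0, sigma, nu), estimating
# (b0, b1, log sigma, log nu) by numerical optimization.
set.seed(42)
x <- 1:50
y <- 1 + 0.5 * x + rt(50, df = 3)            # heavy-tailed errors
negloglik <- function(par){
  b <- par[1:2]; sigma <- exp(par[3]); nu <- exp(par[4])
  z <- (y - b[1] - b[2] * x) / sigma
  -sum(dt(z, df = nu, log = TRUE) - log(sigma))
}
start <- c(coef(lm(y ~ x)), 0, log(5))       # OLS start; sigma = 1, nu = 5
fit <- optim(start, negloglik)
c(b0 = fit$par[1], b1 = fit$par[2],
  sigma = exp(fit$par[3]), nu = exp(fit$par[4]))
```

Estimating $\nu$ from the data is the point: the model adapts its tail heaviness instead of forcing $\nu=1$ as the Cauchy assumption does.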
39,067
coxph ran out of iterations and did not converge
This may be a case where, as the coxph() documentation page puts it, "the actual MLE estimate of a coefficient is infinity" so that "the associated coefficient grows at a steady pace and a race condition will exist in the fitting routine." In particular, close interrelations of the start / end times with the total_usage variable may be the problem here. When I have problems with a continuous predictor variable like your total_usage in survival analysis, I examine a split of the continuous variable at the median. Look at survival curves from your data based on a split of total_usage at its median value of $5866.2$ (the coxph() for this simple analysis also didn't converge): plot(survfit(f_sur~(total_usage > 5866.2),data=ff_usage)) Looks like almost all censoring times and events for the low total_usage cases are before something like time=700, while almost all events and censoring times for the high total_usage subset are greater than that time. Also, examining: summary(survfit(f_sur~(total_usage > 5866.2),data=ff_usage)) may provide some insight. My data sets are typically much smaller than this, but I have run into related problems in Cox analysis with "a dichotomous variable where one of the groups has no events," so that hazard ratios are ill-defined. Hope this helps point you in the right direction.
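For what it's worth, the "coefficient grows toward infinity" situation is easy to reproduce on a toy dataset (my own example, not the poster's data): a predictor that perfectly separates events from censored cases makes the Cox partial likelihood monotone in the coefficient.

```r
# A covariate equal to the event indicator: all events in one group,
# none in the other. coxph() warns that beta may be infinite and
# returns a very large coefficient with an inflated standard error.
library(survival)
set.seed(1)
time  <- rexp(40)
event <- rep(c(1L, 0L), each = 20)
x     <- event                      # perfectly separates events
fit <- suppressWarnings(coxph(Surv(time, event) ~ x))
coef(fit)                           # huge positive value
```

The same mechanism is at work whenever a split of a continuous covariate leaves one side with (almost) no events in the relevant risk sets.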
39,068
coxph ran out of iterations and did not converge
The result: f_cox Call: coxph(formula = f_sur ~ total_usage + cluster(fault_id), data = ff_usage) coef exp(coef) se(coef) robust se z p total_usage -0.00407 0.996 0 0 -Inf 0 This is the warning message. Notice it is not a failure to converge: Warning message: In fitter(X, Y, strats, offset, init, control, weights = weights, : Loglik converged before variable 1 ; beta may be infinite. When considering problems with survival analyses where estimates blow up, it's often useful to look at tabular displays. (in this case the "explosion" is to the small side rather than the high side.) Consider looking at event crossed with your clustering variable: with(ff_usage, table(event, fault_id)) Every cluster (all 2294 of them) has event count either 0 or 1, so the algorithm ends up doing a simple tabulation and a zero estimate for sd.coef. It's pretty clearly fake data with not much effort at inserting randomness at least for the counts. The numbers at risk ascend in integer sequences along with "fault_id". with(ff_usage, table(event, fault_id)) fault_id event 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 0 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 15 16 17 18 19 20 21 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 fault_id event 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 0 22 23 24 25 26 27 28 29 30 31 31 32 33 34 35 36 37 38 39 40 41 42 43 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 fault_id event 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 0 44 45 46 47 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 63 64 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 fault_id event 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 0 65 66 67 68 69 70 71 72 73 74 33 35 36 37 39 40 42 43 45 46 47 49 50 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 #--- truncated output which otherwise goes on for pages since there are 2294 "clusters". 
Running this in 2018 produces a mildly different result, although the cause of this issue remains inadequate counts within clusters: Call: coxph(formula = f_sur ~ total_usage + cluster(fault_id), data = ff_usage) coef exp(coef) se(coef) robust se z p total_usage -1.67e-03 9.98e-01 3.84e-05 1.05e-04 -15.9 <2e-16 Likelihood ratio test=6641 on 1 df, p=<2e-16 n= 174353, number of events= 899
39,069
How to determine trend strength from linear regression slope?
A couple of notes: Usually people write the equation as either $y = b_0 + b_1X$ or $y = a + bx$. Your version is OK, but might be confusing when you see other versions. In your equation, $a$ is a measure of how much $f(x)$ is expected to rise for a 1 unit increase in $x$. If $a$ is positive, then $f(x)$ is expected to rise as $x$ rises; if $a$ is negative, then just the reverse. So, $a$ is a measure of the slope. But $a$ is unit-dependent: If you change from measuring $x$ in millimeters to meters, $a$ will change, but its meaning will not. There are a few measures of the strength of the relationship. The most common is $R^2$; this is a measure of the proportion of variance in $f(x)$ that is explained by the linear relationship with $x$. EDIT with regard to new question A trend occurs in units per unit time; there are several ways to standardize the units. You could, perhaps most simply, use percentage change from the beginning point. This is what is often done, e.g., with trends in stock market averages to accommodate their different initial values.
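To make the two points concrete — slope as units per time, $R^2$ as strength, and percent-change-from-start as a unit-free rescaling — here is a small R sketch with made-up data:

```r
# Slope and R^2 from a linear fit, then the same trend re-expressed
# as percent change from the starting value so that series with
# different initial levels become comparable.
set.seed(7)
t <- 1:24
sales <- 100 + 3 * t + rnorm(24, sd = 5)      # toy monthly series
fit <- lm(sales ~ t)
unname(coef(fit)[2])          # slope: units per time step
summary(fit)$r.squared        # strength of the linear relationship
pct <- 100 * (sales - sales[1]) / sales[1]    # percent change from start
unname(coef(lm(pct ~ t))[2])  # slope: percent per time step
```

The percent-per-step slope stays the same if the whole series is rescaled, which is why it works for comparing trends across series with different starting levels.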
39,070
How to determine trend strength from linear regression slope?
R^2 is a scaled measure of the error in the fit. Here is some more information on it: http://mathworld.wolfram.com/CorrelationCoefficient.html

Although R^2 is useful, there is no perfect measure. They each have strengths and weaknesses. I find that I use measures like the Akaike Information Criterion much more, because I have a number of candidate analytic functions that fit with somewhat consistent R^2 and I need to find a mixture of them, with weights, that more likely indicates the underlying nature of the system. Relevant links include: http://www.csse.monash.edu.au/~dschmidt/ModelSectionTutorial1_SchmidtMakalic_2008.pdf

The slope can go from -infinity to +infinity, though in practice it is limited by the economics of the business. If you are selling butter and you have a finite production capacity, then the largest number of sales you can have is constrained by that capacity, or by the sum of the historic under-sold capacity plus storage. A business can't lose infinite money, only its entire net worth plus all the debt its credit rating can rack up. Your negative slope will be constrained by business realities of that sort - but they will be particular to your business. In theory it could be anything between plus and minus infinity. In practice, if you pick the right sorts of values - the right sorts of domain and range - then your slopes will be in a useful range.

You might consider two different EWMA (Exponentially Weighted Moving Average) functions, one with a different period than the other. When the short period is above the long, there is some sort of increase (positive), and when the long is on top then there is a decrease. It is a very simplistic indicator, but it can be responsive to the data. A single linear fit doesn't respond quickly to changing business realities, especially if it is operating over a large span of time or sample values.
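A minimal sketch of the two-EWMA crossover idea (the smoothing factors and the sales figures are invented):

```python
# Two exponentially weighted moving averages with different effective periods.
def ewma(series, alpha):
    out, s = [], series[0]
    for x in series:
        s = alpha * x + (1 - alpha) * s   # larger alpha -> shorter effective period
        out.append(s)
    return out

sales = [10, 11, 12, 14, 13, 15, 17, 16, 18, 20]
short = ewma(sales, 0.5)    # fast-reacting average
long_ = ewma(sales, 0.1)    # slow-reacting average
# Short above long -> recent trend is up; short below long -> down.
trend_up = short[-1] > long_[-1]
```

Unlike a single linear fit over the whole history, the crossover responds as soon as recent values pull the short average away from the long one.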
39,071
How to determine trend strength from linear regression slope?
One way to make comparisons on the relative "strength" of 2 independent variables in the same linear model is to divide them by their respective standard deviations (i.e. use z-scores). It's still not an apples to apples comparison, but it can be useful to see if, for example, a 1 SD increase in X results in a greater increase in Y than 1 SD of C does - particularly if the SD is a useful descriptor of both IVs (e.g. they are normally distributed).
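A minimal sketch of the standardization step (data invented; a regression fit on `zx` and `zc` would then have coefficients on a common per-SD scale):

```python
# Convert each predictor to z-scores before fitting the model.
import statistics

def zscores(v):
    m, s = statistics.mean(v), statistics.stdev(v)
    return [(x - m) / s for x in v]

x = [120, 135, 150, 160, 180]    # e.g. one predictor, in its own units
c = [1.2, 1.5, 1.1, 1.9, 1.6]    # e.g. another predictor, different units
zx, zc = zscores(x), zscores(c)
# After standardizing, a coefficient on zx is the expected change in y per
# 1 SD of x, directly comparable with the coefficient on zc.
```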
39,072
How to interpret a logistic regression model with all negative coefficient?
Yes, it is possible. A couple of things here. The direction of your predictors is critical to the interpretation; if they were scaled in the opposite direction, they would be positive. Second, it would seem on the face of it that your predictors lower the log odds given a unit change, which in itself might be good if, say, the outcome is death. Also, you might consider centering some of your predictors if the intercept does not seem interpretable.
39,073
How to interpret a logistic regression model with all negative coefficient?
From your description I see nothing out of the ordinary. A negative intercept means that the estimated probability of the response is less than 50% when all model covariates equal zero. If the coefficients of the model covariates are negative, then yes, the corresponding odds ratios are smaller than 1. If this is unexpected given your data, you may need to check how your covariates are coded.
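To see the numbers, here is a small sketch with invented coefficients; exponentiating a negative coefficient always gives an odds ratio below 1:

```python
# Odds ratio for a logistic coefficient is exp(beta).
import math

betas = {"intercept": -0.5, "x1": -0.2, "x2": -1.3}    # invented estimates
odds_ratios = {k: math.exp(b) for k, b in betas.items() if k != "intercept"}
# x1: ~0.82, x2: ~0.27 -- each unit increase multiplies the odds by less than 1.
```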
39,074
How to interpret a logistic regression model with all negative coefficient?
I didn't really get logistic regression until I thought about it this way:

Picture the S curve (the logistic) going from -3 to 3. Look at the coefficient estimate on your constant term, and mark it on the x axis of the S curve. Each coefficient moves you $\beta$ units along the x axis of the S curve. If you want to know what probability that corresponds to, go up from the x axis to the S curve and then over to the y axis.

So, say that your intercept is, like, -.5. This is something like 40% probability (or so). Say your first beta is -.2 or something. This means that you follow the x axis over to -.7, which has a lower probability. Say you have a coefficient that is -5. That'll take you way out left, where chances are basically zero.

It's really pretty simple when you break it down this way.
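The same walk in code (the numbers match the example above; `logistic` is just the S curve):

```python
# Map positions on the x axis of the S curve to probabilities.
import math

def logistic(z):
    return 1 / (1 + math.exp(-z))

p_intercept = logistic(-0.5)          # roughly 0.38 -- the "40% or so"
p_after_beta = logistic(-0.5 - 0.2)   # moving -0.2 along the x axis lowers it
p_far_left = logistic(-5.0)           # way out left: essentially zero
```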
39,075
ANOVA results do not match post-hoc Tukey test, how to proceed?
This is not an area where there is universal agreement. My view is that: 1) the two tests answer different questions, so it's not surprising that they get different answers; 2) this discrepancy is more a demonstration of the problems of p-values and, especially, of using cutoff values like p < .05; 3) it also gets at the problems of looking for "cookie cutter" methods of doing statistics.

Elaborating a bit: The overall ANOVA asks about the relationships among all the levels; tests like Tukey's HSD compare individual levels. Those are two different questions. Which one are you interested in? Perhaps both?

You say (in the title) that the results are inconsistent. But you only give evidence that the p-values are on opposite sides of .05. So? Why do you care about that? It would be much better to examine effect sizes. And these effect sizes don't have the problem of being inconsistent, because they don't vary.

Finally, it seems like (but perhaps I am wrong) you are asking for a general solution to a general problem; but really, the questions are particular.
39,076
ANOVA results do not match post-hoc Tukey test, how to proceed?
It's possible that you could have no significant differences among all of the individual means with the most liberal of tests - even a planned comparison. I was just tasking my students to develop simulated data with just this feature. You do an ANOVA to test the pattern of your data. If the test is passed then, in most situations, all you need to do then is describe that pattern.
39,077
ANOVA results do not match post-hoc Tukey test, how to proceed?
I used SPSS and I used to have the same problem, and I have tried different tests in the post hoc options. For equal variances assumed, I suggest you use the Dunnett test, in which you can get different results if you change the selection in Control Category (First or Last) and sometimes in the Test (2-sided, < Control, > Control). For unequal variances assumed, you should use Tamhane's T2. That's my personal experience.
39,078
Multiple imputation with the Amelia package
Amelia assumes that the data follow a multivariate normal distribution, so all information about the relations in the data can be summarized by just means and covariances. When data are incomplete, Amelia uses the well-known EM algorithm to find corrected estimates of the means and covariances. See Little and Rubin (2002) for more detail.

In their original form the EM estimates cannot be used to create multiple imputations, as the estimates do not reflect the fact that they have been estimated from a finite sample. In order to solve this, Amelia first takes m bootstrap samples and applies the EM algorithm to each of these bootstrap samples. The m estimates of means and covariances will now be different. The first set of estimates is used to draw the first set of imputed values by a form of regression analysis, the second set is used to calculate the second set of imputed values, and so on.

As Amelia assumes a multivariate normal distribution, it will work best when your data are approximately normally distributed (possibly after a transformation), and when the statistics you calculate from the data in your complete-data analysis are near the center of the distribution, like means, modes or regression weights.
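A toy sketch of the bootstrap-then-impute idea for a single incomplete variable. Note this substitutes a simple complete-case regression for Amelia's EM step, purely to show why the m imputed datasets come out different:

```python
# Draw m bootstrap samples, fit a regression on each, impute with each fit.
import random

random.seed(1)
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [1.1, 2.3, 2.8, None, 5.2, None]      # None marks missing y-values

def fit_on_bootstrap(x, y):
    pairs = [(a, b) for a, b in zip(x, y) if b is not None]
    boot = [random.choice(pairs) for _ in pairs]        # resample complete cases
    n = len(boot)
    mx = sum(a for a, _ in boot) / n
    my = sum(b for _, b in boot) / n
    sxx = sum((a - mx) ** 2 for a, _ in boot) or 1e-9   # guard a degenerate resample
    slope = sum((a - mx) * (b - my) for a, b in boot) / sxx
    return slope, my - slope * mx

m = 3
imputations = []
for _ in range(m):                         # one imputed dataset per bootstrap fit
    slope, intercept = fit_on_bootstrap(x, y)
    imputations.append([b if b is not None else slope * a + intercept
                        for a, b in zip(x, y)])
```

Because each bootstrap sample yields slightly different parameter estimates, the missing cells get different imputed values in each of the m datasets, which is what carries the estimation uncertainty into the multiple imputations.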
39,079
Probability distribution of income
Typically lognormal distributions, or sometimes Pareto distributions, are used to model the distribution of income. Here you can find information on how well these distributions fit real data for Germany, the UK and the US: http://ideas.repec.org/p/wpa/wuwpmi/0505006.html Here is a proposal to use a generalized lognormal distribution: https://pure.mpg.de/rest/items/item_1586247/component/file_1586246/content
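A quick illustration of the lognormal's right skew, with invented parameters (the median sits near exp(mu) while the heavy right tail pulls the mean above it):

```python
# Simulate lognormal "incomes": log-income ~ Normal(mu, sigma).
import math
import random
import statistics

random.seed(0)
mu, sigma = math.log(40_000), 0.6
incomes = [random.lognormvariate(mu, sigma) for _ in range(50_000)]

median = statistics.median(incomes)        # near exp(mu) = 40,000
mean = statistics.mean(incomes)            # near exp(mu + sigma**2 / 2), higher
```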
39,080
Probability distribution of income
If you want to include heavy tails while maintaining most of the remaining features of the lognormal, might I suggest the log-Cauchy or, if you need finite moments on the log scale, the log-Student?
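One way to draw from a log-Cauchy is to exponentiate standard Cauchy draws obtained via the inverse CDF (the location and scale below are illustrative; the clamp just avoids float overflow in the extreme tail):

```python
# Sample from a log-Cauchy distribution: exp of a Cauchy random variable.
import math
import random

random.seed(42)

def log_cauchy(mu=0.0, gamma=1.0):
    u = random.random()
    c = mu + gamma * math.tan(math.pi * (u - 0.5))  # Cauchy draw by inverse CDF
    c = max(min(c, 700.0), -700.0)                  # clamp to avoid exp overflow
    return math.exp(c)

sample = [log_cauchy() for _ in range(1000)]
```

The resulting sample is strictly positive and extremely heavy-tailed; like the Cauchy itself, the log-Cauchy has no finite moments, so summarize it with quantiles rather than means.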
39,081
Skills & coursework needed to be a data analyst
Aside from the technical skills, like R or SAS, SQL will be important, and a few other higher-level skills, including:

Data Manipulation: To be able to analyse data, you will frequently have to spend considerable time acquiring the data and manipulating it into a form which can be analysed. Many statisticians will tell you that most of their time on a given project is spent manipulating the data - so it is important to be good at this!

Understanding: Many people vastly underestimate the amount of time that is required just to understand a complex dataset. In bygone days one had to serve time apprenticed to a master craftsman; with a dataset you have to spend time looking at the various facets of the data, understanding its dimensions and missing data, and talking to people to try to understand the data. Again you will spend considerable time doing this; it requires practice to build up this skill!

Visualization: Going hand in hand with the above is visualization. Knowing how to plot data to help gain understanding is important. Later, when you want to show somebody else what you have found in the dataset, a carefully created picture says a thousand words.

Requirements: One of the hardest to learn is requirements gathering. Your customer will frequently not be clear what they want in their own head, and even less clear in what they say to you.
39,082
Skills & coursework needed to be a data analyst
SAS is important in the pharmaceutical industry but not necessarily in other disciplines. In business and marketing, time series analysis and survey sampling are particularly important. Yes, big pharma and business use SAS a lot, but it is expensive and makes more economic sense with the multiple users that you would find in big companies. In the social sciences SPSS is more commonly used, but it is also expensive. R is free, and many advanced statistical procedures can be found in the CRAN libraries.
39,083
Skills & coursework needed to be a data analyst
I would recommend that you take some microeconometrics courses. Business analysts spend an inordinate amount of time patching up analyses that, once you've done this, you'll see don't make much sense. One common example is running regressions of revenue or profit on price and believing that these estimate some sort of demand elasticity. A lot of media mix modeling falls into this. Understanding endogeneity and counterfactuals goes a long way in the business world.

I would also make sure I know how to clean data, since most of your time will be spent doing exactly that. This spans hard skills like SQL, and perhaps some Hadoop/Pig/Hive, but also being able to do quick gut checks. I call these softer skills number sense. These things are rarely taught in master's programs, where the data you see comes packaged with ribbons and bows.
39,084
Skills & coursework needed to be a data analyst
In addition to the many great suggestions, I'd add that you should try to learn some soft/people/analysis skills and perhaps at least a bit about one field where you might want to apply your skills as well. In the real world, no one hands you a clean set of well-documented data with a precise question to answer. You will need to massage the data, understand the data, pull expertise out of people, understand the data-owner's business, beg for data, present interim results, etc. You may often end up using embarrassingly-unsophisticated techniques. (I hope you don't expect labeled data.) Don't neglect statistics and sophisticated techniques. But don't think that you'll spend a significant portion of your on-the-job time doing and thinking about them.
39,085
Skills & coursework needed to be a data analyst
I recommend reading these two before doing anything; I hope they can shape your thinking: Analyzing the Analyzers, and the Strata survey.
39,086
Inter-rater statistic for skewed rankings
A measure that is low when highly skewed raters agree is actually highly desirable. Gwet's AC1 specifically assumes that chance agreement should be at most 50%, but if both raters vote +ve 90% of the time, Cohen and Fleiss/Scott say that chance agreement is 81% on the positives and 1% on the negatives, for a total of 82% expected accuracy. This is precisely the kind of bias that needs to be eliminated. A contingency table of

    81  9
     9  1

represents chance-level performance. Fleiss and Cohen Kappa and Correlation are 0, but AC1 is a misleading 89%. We of course see the accuracy of 82%, and also Recall, Precision and F-measure of 90%, if we consider them in those terms...

Consider two raters, one of whom is a linguist who gives highly reliable part-of-speech ratings (noun versus verb, say), and the other of whom is, unbeknownst to us, a computer program so hopeless it just guesses. Since "water" is a noun 90% of the time, the linguist says noun 90% of the time and verb 10% of the time. One form of guessing is to label words with their most frequent part of speech; another is to guess the different parts of speech with probability given by their frequency. This latter "prevalence-biased" approach will be rated 0 by all Kappa and Correlation measures, as well as DeltaP, DeltaP', Informedness and Markedness (the regression coefficients that give one-directional prediction information, and whose geometric mean is the Matthews Correlation). It corresponds to the table above.

The "most frequent part of speech" tagger gives the following table for 100 words:

    90 10
     0  0

That is, it predicts correctly all 90 of the linguist's nouns, but none of the 10 verbs. All Kappas and Correlations, and Informedness, give this 0, but AC1 gives it a misleading 81%. Informedness gives the probability that the tagger is making an informed decision (that is, what proportion of the time it is making an informed decision), and correctly returns 0. On the other hand, Markedness estimates what proportion of the time the linguist is correctly marking the word, and it underestimates this, at 40%.

If we considered this in terms of the precision and recall of the program, we have a Precision of 90% (we get the 10% wrong that are verbs), but since we only consider the nouns, we have a Recall of 100% (we get all of them, as the computer always guesses noun). But Inverse Recall is 0, and Inverse Precision is undefined, as the computer makes no -ve predictions (consider the inverse problem where verb is the +ve class, so the computer is now always predicting -ve as the more prevalent class). In the dichotomous case (two classes) we have:

    Informedness = Recall + Inverse Recall - 1
    Markedness = Precision + Inverse Precision - 1
    Correlation = GeoMean(Informedness, Markedness)

Short answer: Correlation is best when there is nothing to choose between the raters, otherwise Informedness. If you want to use Kappa and think both raters should have the same distribution, use Fleiss; but normally you will want to allow them to have their own scales, and use Cohen. I don't know of any example where AC1 would give a more appropriate answer, but in general the unintuitive results come from mismatches between the biases/prevalences of the two raters' class choices. When bias = prevalence = 0.5 all of the measures agree; when the measures disagree, it is your assumptions that determine what is appropriate, and the guidelines I've given reflect the corresponding assumptions.

This "water" example originated in:

Jim Entwisle and David M. W. Powers (1998), "The Present Use of Statistics in the Evaluation of NLP Parsers", pp. 215-224, NeMLaP3/CoNLL98 Joint Conference, Sydney, January 1998. (Should be cited for all Bookmaker theory/history purposes.)
http://david.wardpowers.info/Research/AI/papers/199801a-CoNLL-USE.pdf
http://dl.dropbox.com/u/27743223/199801a-CoNLL-USE.pdf

Informedness and Markedness versus Kappa are explained in:

David M. W. Powers (2012), "The Problem with Kappa", Conference of the European Chapter of the Association for Computational Linguistics (EACL2012) Joint ROBUS-UNSUP Workshop. (Cite for work using Informedness or Kappa in an NLP/CL context.)
http://aclweb.org/anthology-new/E/E12/E12-1035.pdf
http://dl.dropbox.com/u/27743223/201209-eacl2012-Kappa.pdf
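To make the comparison concrete, here is a minimal Python sketch (my own illustration, not code from the posts above) computing Cohen's kappa, Gwet's AC1 (using the standard chance term 2*pi*(1-pi)) and Informedness for the two tables, with rows for the guessing rater and columns for the linguist. Note that the AC1 values this particular formula yields (roughly 0.78 for the first table and 0.89 for the second) differ somewhat from the percentages quoted above, which may come from a different variant; the kappas and Informedness are 0 in both cases, as stated.

```python
# Hypothetical helper (names are mine); rows = tagger, columns = linguist.
def agreement_stats(table):
    (a, b), (c, d) = table          # a, d = agreements; b, c = disagreements
    n = a + b + c + d
    po = (a + d) / n                # observed agreement
    p1 = (a + b) / n                # row rater's +ve proportion (bias)
    p2 = (a + c) / n                # column rater's +ve proportion (prevalence)
    pe_cohen = p1 * p2 + (1 - p1) * (1 - p2)
    kappa = (po - pe_cohen) / (1 - pe_cohen)
    pi = (p1 + p2) / 2
    pe_ac1 = 2 * pi * (1 - pi)      # Gwet's chance-agreement term
    ac1 = (po - pe_ac1) / (1 - pe_ac1)
    recall = a / (a + c) if a + c else 0.0          # on the linguist's +ves
    inv_recall = d / (b + d) if b + d else 0.0      # on the linguist's -ves
    informedness = recall + inv_recall - 1
    return kappa, ac1, informedness

chance_table = [[81, 9], [9, 1]]     # prevalence-biased guessing
mode_table   = [[90, 10], [0, 0]]    # always guess the most frequent class

for t in (chance_table, mode_table):
    print(agreement_stats(t))        # kappa and Informedness are both ~0
```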
39,087
Inter-rater statistic for skewed rankings
Since skewness is a problem in your case, you might want to use the AC1 interrater reliability statistic proposed by Gwet (2001, 2002). See e.g. Gwet 2008. It is a "more robust chance-corrected statistic that consistently yields reliable results" as compared to $\kappa$. The $\kappa$ statistics can be problematic, because "it is effected by skewed distributions of categories (the prevalence problem) and by the degree to which the coders disagree (the bias problem)" (DiEugenio & Glass, 2004). Or as Feinstein and Cicchetti (1990) observed: In a fourfold table showing binary agreement of two observers, the observed proportion of agreement, P0 can be paradoxically altered by the chance-corrected ratio that creates $\kappa$ as an index of concordance. In one paradox, a high value of P0 can be drastically lowered by a substantial imbalance in the table's marginal totals either vertically or horizontally. In the second pardox, (sic) $\kappa$ will be higher with an asymmetrical rather than symmetrical imbalance in marginal totals, and with imperfect rather than perfect symmetry in the imbalance. An adjustment that substitutes Kmax for $\kappa$ does not repair either problem, and seems to make the second one worse. (emphasis added) References: DiEugenio, Barbara & Glass, Michael (2004). The kappa statistic: a second look. Computational Linguistics 30(1). Feinstein, Alvan R. & Cicchetti, Domenic V. (1990). High agreement but low kappa: I. The problems of two paradoxes. Journal of Clinical Epidemiology 43(6): 543-549. Gwet, Kilem (2001). Handbook of Inter-Rater Reliability: How to Estimate the Level of Agreement Between Two or Multiple Raters. Gaithersburg, MD, STATAXIS Publishing Company Gwet, Kilem (2002). Inter-Rater Reliability: Dependency on Trait Prevalence and Marginal Homogeneity. Statistical Methods for Inter-Rater Reliability Assessment 2.
39,088
Inter-rater statistic for skewed rankings
I think most of these statistics test concordance versus discordance, so they stress the degree to which the raters agree; the fact that the raters will tend to vote yes only 10% of the time is not a factor. Sample size could be, though, because if the sample size is small you won't have many yeses to compare among the voters. That would be a problem for any test of agreement. So, if you can afford it, decide on a number of yes votes you would like to see on average from each voter. If that is 50, take 500 samples to be rated. Certainly the Kappa statistic would be fine for this, as will most others.
39,089
Evaluating recommender systems with (implicit) binary ratings only
I suggest using Expected Utility, or the R-score. Assume your model has created an ordered list of recommendations, where the first item is the one the user is most likely to be interested in and the last is the one the user is least likely to be interested in. Say these recommendations are specified by $r_i$, where $i$ is the position in the list. The expected utility for a particular user $u$ is then defined as $R_u=\sum_{i=1}^{n}\frac{f_u(r_i)}{2^{\frac{i-1}{\alpha-1}}}$ where $f_u(r_i)=1$ if the recommended item $r_i$ is in the user's library, else 0, and $\alpha$ specifies the slope of the decay: the smaller $\alpha$, the greater the slope. This metric measures how likely it is that a user will view a recommended item, assuming a list of items is recommended to him. If the item of interest is placed far down the list, it is unlikely that the user will bother to scroll/look that far, even if it is exactly what he/she wanted. In this sense, $\alpha$ specifies how patient the average user is. To calculate the R-score for a set of users, it is recommended to normalize the R-score per user beforehand. The resulting score is $R =\sum_u\frac{R_u}{R_u^*}$ where $R_u^*$ is the maximum R-score you can achieve for one user; that is, given that a user has $k$ items in his library that you want to predict, the items at the first $k$ positions are exactly those ones. For more metrics, or as a general read, I recommend Evaluating Recommendation Systems by Shani & Gunawardana.
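The formula above can be sketched in a few lines of Python (function names are mine, not a standard API):

```python
def r_score(ranked, library, alpha):
    """R_u for one user: ranked is the recommendation list, best first."""
    return sum(1 / 2 ** ((i - 1) / (alpha - 1))
               for i, item in enumerate(ranked, start=1)
               if item in library)

def normalized_r_score(ranked, library, alpha):
    """R_u / R_u*, where R_u* places all len(library) hits at the top."""
    best = sum(1 / 2 ** ((i - 1) / (alpha - 1))
               for i in range(1, len(library) + 1))
    return r_score(ranked, library, alpha) / best

# Hits at positions 1 and 3 with alpha = 2: R_u = 1/2^0 + 1/2^2 = 1.25
print(r_score(["a", "b", "c", "d"], {"a", "c"}, alpha=2))
# Best possible is 1 + 1/2 = 1.5, so the normalized score is 1.25/1.5
print(normalized_r_score(["a", "b", "c", "d"], {"a", "c"}, alpha=2))
```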
39,090
Association or relationship
Under the definitions you've listed, "association" and "relationship" would not be interchangeable. However, I would argue that a better use of the term "relationship" would make them fairly synonymous in this application. I think that your teacher was making an important, and correct, point about correlation and regression, but that the way it was done (at least according to your memory) used the term "relationship" in a non-standard way. I think you are on solid footing to make the claim as you do in your last paragraph. For more info on the asymmetrical vs. symmetrical nature of regression and correlation, see here.
39,091
Association or relationship
Yes, you are basically correct. Regression is used when you want to show how a dependent variable $Y$ is related to one or more independent variables. When we refer to correlation we are talking about an association. Regression is often used to predict future responses for $y$ based on given values of $x$. In least squares regression the predictor variables are assumed to be observed without error, and $Y$ has an independent random error term. There is also errors-in-variables regression, where both $X$ and $Y$ are assumed to be observed with error. For that problem least squares is not the appropriate way to estimate the regression function. The function $f(x) =E(Y\vert X=x)$ for the model is called the regression function. Nevertheless the two ideas are intertwined. The Pearson product-moment correlation measures the strength of the linear relationship between $X$ and $Y$. If you are using a simple linear regression model $Y=bX+a+\epsilon$, where $\epsilon$ is the independent error term in $Y$ and $a$ and $b$ are the intercept and slope parameters respectively, then there is a direct relationship between the parameter $b$ and the Pearson correlation coefficient.
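A quick numerical illustration of that direct relationship (my own sketch, not from the answer): the least-squares slope $b$ equals $r\,s_y/s_x$, where $r$ is the Pearson correlation and $s_x$, $s_y$ are the sample standard deviations.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 2.0 * x + rng.normal(size=200)   # simulated linear relationship

slope = np.polyfit(x, y, 1)[0]       # least-squares estimate of b
r = np.corrcoef(x, y)[0, 1]          # Pearson correlation coefficient

print(slope, r * y.std() / x.std())  # identical up to rounding error
```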
39,092
How would you fit ARIMA model with lots of autocorrelations?
I think the issue here is whether a hypothesis test of the residuals is appropriate. You have 60000 observations, so any model will fail a residual test as there is so much data. That doesn't make the model bad, it just means that you have enough data to be able to tell that the model is an inaccurate representation of reality. Step back and ask, what do you want a model for? And what do you know about the data that would help in selecting an appropriate model? Whatever model you end up with, don't expect to find that the residuals are white noise. With enough data, every model can be shown to be inadequate.
39,093
How would you fit ARIMA model with lots of autocorrelations?
We are working with data like this for a major fast-food franchise. The series represents the demand for tacos in 15-minute intervals for the last 5 years (180,000 observations). This series can be treated by building 96 separate models (4 x 24), one for each 15-minute interval, plus a daily model reflecting overall trends, level shifts, holiday effects, etc. in daily values. By integrating the impact of daily values and their history on each of the 96 models and then reconciling, we are able to accurately predict both the demand for 15-minute intervals and the daily totals. The reason you think the acf is significant is, as Rob points out, due to the sample size, since the standard error of the acf is equal to 1/sqrt(N). @Luna As you correctly point out in your comment, one loses the connection between the different time slices, BUT one gains the impact of activity over days/weeks/months while being able to detect changes in daily effects and discover the impact of particular days of the month, etc. We, like you, had studied the "one time series" approach using semi-hourly electricity demand data, only to conclude that we were getting FALSE CONCLUSIONS due to the size/length of the data. In general one could have 96 equations with X exogenous series. This would be called a Vector ARIMA problem and would be unwieldy, as outliers/inliers could distort parameter estimates. Standard errors would be microscopic in size due to the large N. We have found ways to incorporate daily trends directly into each of the 96 equations.
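The reshaping step behind this approach can be sketched as follows (my own illustration with made-up data, not the poster's actual system): a long 15-minute series becomes 96 daily series, one per time-of-day slot, each of which gets its own model.

```python
import numpy as np

slots_per_day = 96                    # 4 intervals/hour x 24 hours
# Stand-in for real demand data: one value per 15-minute slot for 7 days.
demand = np.arange(7 * slots_per_day, dtype=float)

# Row d, column s = observation for slot s on day d; column s is the
# daily history fed to the s-th of the 96 models.
by_slot = demand.reshape(-1, slots_per_day)

slot_0 = by_slot[:, 0]   # e.g. the 00:00-00:15 slot across all 7 days
print(slot_0)            # one value per day: 0, 96, 192, ...
```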
39,094
Detect outliers in mixture of Gaussians
I have suggested, in comments, that an "outlier" in this situation might be defined as a member of a "small" cluster centered at an "extreme" value. The meanings of the quoted terms need to be quantified, but apparently they can be: "small" would be a cluster of fewer than 10 values and "extreme" can be determined as outlying relative to the set of component means in the mixture model. In this case, outliers can be found with simple post-processing of any reasonable cluster analysis of the data.

Choices have to be made in fine-tuning this approach. These choices will depend on the nature of the data and therefore cannot be completely specified in a general answer like this. Instead, let's analyze some data. I use R due to its popularity on this site and succinctness (even compared to Python). First, create some data as described in the question:

    set.seed(17) # For reproducible results
    centers <- rnorm(100, mean=100, sd=20)
    x <- c(centers + rnorm(100*100, mean=0, sd=1),
           rnorm(100, mean=250, sd=1),
           rnorm(9, mean=300, sd=1))

This command specifies 102 components: 100 of them are situated like 100 independent draws from a normal(100, 20) distribution (and will therefore tend to lie between 50 and 150); one of them is centered at 250, and one is centered at 300. It then draws 100 values independently from each component (using a common standard deviation of 1) but, in the last component centered at 300, it draws only 9 values.

According to the characterization of outliers, the 100 values centered at 250 do not constitute outliers: they should be viewed as a component of the mixture, albeit situated far from the others. However, the cluster of nine high values consists entirely of outliers. We need to detect these but no others. Most omnibus univariate outlier-detection procedures would either not detect any of these 109 highest values or would flag all 109 as outliers.

Suppose we have a good sense of the standard deviations of the components (obtained from prior information or from exploring the data). Use this to construct a kernel density estimate of the mixture:

    d <- density(x, bw=1, n=1000)
    plot(d, main="Kernel density")

The (almost invisible) blip at the extreme right qualifies as a set of outliers: its small area (less than 10/10109 ≈ 0.001 of the total) indicates it consists of just a few values, and its situation at one extreme of the x-axis earns it the appellation of "outlier" rather than "inlier." Checking these things is straightforward:

    x0 <- d$x[d$y > 1000/length(x) * dnorm(5)]
    gaps <- tail(x0, -1) - head(x0, -1)
    hist(gaps, main="Gap Counts")

The density estimate d is represented by a 1D grid of 1000 bins. These commands retain all bins in which the density is sufficiently large. For "large" I chose a very small value, to make sure that even the density of a single isolated value is picked up, but not so small that obviously separated components are merged. Evidently the gap distribution has two high outliers (which can automatically be detected using any simple procedure, even an ad hoc one). One characterization is that they both exceed 25 (in this example). Let's find the values associated with them:

    large.gaps <- gaps > 25
    ranges <- rbind(tail(x0, -1)[large.gaps],
                    c(tail(head(x0, -1)[large.gaps], -1), max(x)))

The output is

             [,1]     [,2]
    [1,] 243.9937 295.7732
    [2,] 256.3758 300.9340

Within the range of data (from 25 to 301) these gaps determine two potential outlying ranges, one from 244 to 256 (column 1) and another from 296 to 301 (column 2). Let's see how many values lie within these ranges:

    lapply(apply(ranges, 2, function(r) {x[r[1] <= x & x <= r[2]]}), length)

The result is

    [[1]]
    [1] 100

    [[2]]
    [1] 9

The 100 is too large to be unusual: that's one of the components of the mixture. But the 9 is small enough.

It remains to see whether either of these clusters might be considered outlying (as opposed to inlying):

    apply(ranges, 2, mean)

The result is

    [1] 250.1848 298.3536

The center of the 100-point cluster is at 250 and the center of the 9-point cluster is at 298, far enough from the rest of the data to constitute a cluster of outliers. We conclude there are nine outliers. Specifically, these are the values determined by column 2 of ranges,

    x[ranges[1,2] <= x & x <= ranges[2,2]]

In order, they are

    299.0379 300.0376 300.2696 300.3892 300.4250 300.5659 300.7018 300.8436 300.9340
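For readers working in Python, the same gap-based idea can be sketched as follows. This is a simplified variant that segments the sorted values directly instead of going through a kernel density estimate; the gap threshold (10), cluster-size cutoff (10), and distance cutoff (50) are illustrative choices, not part of the original recipe.

```python
import numpy as np

rng = np.random.default_rng(17)

# Recreate the mixture: 100 signal components, one far cluster of 100
# values, and a small extreme cluster of 9 values (the intended outliers).
centers = rng.normal(100, 20, size=100)
x = np.concatenate([
    np.repeat(centers, 100) + rng.normal(0, 1, size=100 * 100),
    rng.normal(250, 1, size=100),
    rng.normal(300, 1, size=9),
])

# Segment the sorted values wherever consecutive points are far apart.
xs = np.sort(x)
breaks = np.where(np.diff(xs) > 10)[0] + 1   # gap threshold (illustrative)
clusters = np.split(xs, breaks)

# Flag clusters that are both small and extreme relative to the bulk.
median = np.median(x)
outliers = [c for c in clusters
            if len(c) < 10 and abs(c.mean() - median) > 50]

print(sum(len(c) for c in outliers))  # → 9 (the small extreme cluster)
```

Because every Gaussian component contributes either 100 points or 9 points, any segment produced by a between-cluster gap contains at least 100 values except the nine-point cluster near 300, so only those nine are flagged.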
39,095
Detect outliers in mixture of Gaussians
I'm not sure I understand the issue here, but the MAD-median rule, $\frac{|X-M|}{MADN}>2.24$, where $M$ is the median and $MADN = \frac{\text{median absolute deviation from the median}}{0.6745}$, is pretty commonly used. Wilcox's WRS package in R has an out() function that implements this and returns the cases to keep and the cases to drop, and I'm sure it would be easy to code in other languages. On the face of it this would be an answer to your question - one of many, of course, because there is a vast literature on outliers. You may need a more restrictive definition of "outlier", of course. If you are happy with any observation that is consistent with a mixture of 100s of Gaussian distributions, it is hard to imagine anything being ruled an outlier.
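For illustration, a direct Python translation of the rule as stated (this is not Wilcox's out() itself, just the formula; the 0.6745 factor rescales the MAD to be consistent with the standard deviation under normality):

```python
import numpy as np

def mad_median_outliers(x, crit=2.24):
    """Flag values with |X - M| / MADN > crit, where M is the median
    and MADN = median(|X - M|) / 0.6745."""
    x = np.asarray(x, dtype=float)
    m = np.median(x)
    madn = np.median(np.abs(x - m)) / 0.6745
    return np.abs(x - m) / madn > crit

# 100 standard-normal values plus one gross outlier at 15
data = np.concatenate([np.random.default_rng(1).normal(0, 1, 100), [15.0]])
print(mad_median_outliers(data)[-1])  # → True: the value 15 is flagged
```

Because the median and MAD are barely moved by the single extreme point, the rule flags it cleanly, which is the usual argument for this rule over mean/SD-based ones.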
39,096
Detect outliers in mixture of Gaussians
If your range of possible distributions of non-outliers is so broad, I don't think you can have any outliers. But perhaps you can impose some restrictions on the mixture? For example, if N = 10,000 and the data are a mixture of 9,900 values from $\mathcal{N}(10, 10)$ and 100 values from $\mathcal{N}(50, 100)$, then some very large values would be non-outliers. And in general, automated searching for outliers can only be a first step.
39,097
Detect outliers in mixture of Gaussians
The most elegant solution I can think of is a mixture of Gaussians model, in which you have k Gaussians corresponding to your signal (with a prior encouraging their variances to be reasonably small), and 1 diffuse Gaussian capturing the outliers ("diffuse" means huge variance), where you specify the prior proportion of outliers (e.g. 1%) in a Dirichlet prior. If you don't want to do EM, you may consider using k-means as a warm-start, and then optimize iteratively, where the slow step is the optimization of the discrete cluster assignments. But if the (co)variances of the signal Gaussians are approximately equal, this means that most reassignments will be to/from neighboring clusters, or to/from the outlier cluster.
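The mechanics of the diffuse-component idea can be illustrated with a hand-rolled sketch. This uses fixed parameters and a single signal component for brevity; in practice the means, variances, and mixing weights would be estimated, e.g. by EM, and the 1% outlier weight would come from the Dirichlet prior described above.

```python
import numpy as np

def normal_pdf(x, mu, sd):
    # Gaussian density, written out to keep the example dependency-free
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(10, 1, 95), rng.normal(200, 1, 5)])

# One signal component plus one diffuse outlier component with a small
# prior weight (1%) and a huge standard deviation.
mus, sds, weights = [10.0, 0.0], [1.0, 1000.0], [0.99, 0.01]

dens = np.stack([w * normal_pdf(x, m, s)
                 for m, s, w in zip(mus, sds, weights)])
posterior_outlier = dens[1] / dens.sum(axis=0)

print((posterior_outlier > 0.5).sum())  # → 5: only the far points are flagged
```

Points near the signal mean get essentially zero posterior probability of coming from the diffuse component, while points far from every signal component are captured by it almost surely, which is exactly the behavior the mixture formulation buys you.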
39,098
Joint distribution of two gamma random variables
As stated, the problem does not make sense, because a joint distribution cannot be recovered from the marginal distributions alone! The only meaningful case (as a homework problem) is to assume independence, in which case the density of the joint distribution is simply the product of the two marginal densities...
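For completeness, under independence with $X_i \sim \mathrm{Gamma}(k_i, \theta_i)$ (a shape-scale parametrization is assumed here, since the question does not fix one), that product is

$$f_{X_1,X_2}(x_1,x_2) = f_{X_1}(x_1)\,f_{X_2}(x_2) = \prod_{i=1}^{2} \frac{x_i^{\,k_i-1}\, e^{-x_i/\theta_i}}{\Gamma(k_i)\,\theta_i^{\,k_i}}, \qquad x_1, x_2 > 0.$$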
39,099
Joint distribution of two gamma random variables
OP notrockstar knows the solution for the case when the random variables are independent but presumably cannot use it since a solution without the independence assumption is being sought. Perhaps the OP has posted only a simplified version of the question, and what has been left out makes a solution possible. For example, if $X_1$ and $X_2$ are given to be the times of the $k$-th and $(k+\ell)$-th arrivals in a Poisson process of intensity (arrival rate) $\lambda$, then these are Gamma random variables with order parameters $k$ and $k+\ell$ respectively. Furthermore, conditioned on $X_1 = x_1$, $X_2$ is a displaced Gamma random variable with order parameter $\ell$, that is, $X_2 = x_1 + Y$ where $Y$ is a Gamma random variable with order parameter $\ell$. Thus, $$\begin{align*} f_{X_1,X_2}(x_1,x_2) &= f_{X_2|X_1}(x_2|x_1)f_{X_1}(x_1)\\ &= \begin{cases} f_Y(x_2-x_1)f_{X_1}(x_1), & 0 < x_1 < x_2 < \infty,\\ 0, & \text{otherwise.} \end{cases} \end{align*}$$ In view of the additional information provided by the OP that what is really wanted is the joint distribution of $Y_1 = X_1 + X_2$ and $Y_2 = \frac{X_1}{X_1+X_2}$, maybe the problem is intended as a drill in transformation of variables: can you express the joint density $f_{Y_1,Y_2}(y_1,y_2)$ in terms of the joint density $f_{X_1,X_2}(\cdot,\cdot)$ as $J(y_1,y_2)f_{X_1,X_2}(g_1(y_1,y_2), g_2(y_1, y_2))$, with the Gamma functions thrown in as distractions, or merely as hints that $X_1, X_2 \in (0, \infty)$, to see if the students can deduce that $Y_2 \in (0,1)$? This problem is readily solvable since it is easy to invert the transformation, find the Jacobian etc.
At the end, one could say something like "If $X_1$, $X_2$ are assumed to be independent (this is not stated in the problem given) random variables with Gamma distributions, then the joint density $f_{X_1,X_2}(\cdot,\cdot)$ factors into the product of the marginal densities, and in this case, $f_{Y_1,Y_2}(y_1,y_2)$ equals "$\cdots$" possibly adding that $Y_1$ and $Y_2$ are obviously independent if they are (I don't believe they are but am willing to abide a proof that they are), or giving their marginal pdfs too etc. In summary, $f_{Y_1,Y_2}(y_1,y_2)$ can be stated in terms of the joint density $f_{X_1,X_2}(\cdot,\cdot)$ without knowing the exact form of $f_{X_1,X_2}$ or the marginal densities of $X_1$ and $X_2$. The assumption that $X_1$, $X_2$ are independent can be used at the very end to say explicitly what $f_{Y_1,Y_2}(y_1,y_2)$ is; the Gammaity or independence of $X_1$ and $X_2$ is not needed or used at all in the earlier work, and indeed serves merely to clutter up the calculations without shedding much light on the matter.
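The transformation drill alluded to above works out as follows (standard change of variables; no independence is needed). The inverse map is $x_1 = y_1 y_2$, $x_2 = y_1(1-y_2)$, so

$$|J(y_1,y_2)| = \left|\det\begin{pmatrix} \partial x_1/\partial y_1 & \partial x_1/\partial y_2 \\ \partial x_2/\partial y_1 & \partial x_2/\partial y_2 \end{pmatrix}\right| = \left|\det\begin{pmatrix} y_2 & y_1 \\ 1-y_2 & -y_1 \end{pmatrix}\right| = y_1,$$

and therefore $f_{Y_1,Y_2}(y_1,y_2) = y_1\, f_{X_1,X_2}\bigl(y_1 y_2,\, y_1(1-y_2)\bigr)$ for $y_1 > 0$ and $0 < y_2 < 1$.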
39,100
Kruskal-Wallis or Fligner test to check homogeneity of variances?
If I understand correctly, you have one predictor (explanatory variable $x$) and one criterion (predicted variable $y$) in a simple linear regression. The significance tests rest on the model assumption that for each observation $i$ $$ y_{i} = \beta_{0} + \beta_{1} x_{i} + \epsilon_{i} $$ where $\beta_{0}, \beta_{1}$ are the parameters we want to estimate and test hypotheses about, and the errors $\epsilon_{i} \sim N(0, \sigma^{2})$ are normally distributed random variables with mean 0 and constant variance $\sigma^{2}$. All $\epsilon_{i}$ are assumed to be independent of each other, and of the $x_{i}$. The $x_{i}$ themselves are assumed to be error free. You used the term "homogeneity of variances", which is typically used when you have distinct groups (as in ANOVA), i.e., when the $x_{i}$ only take on a few distinct values. In the context of regression, where $x$ is continuous, the assumption that the error variance is $\sigma^{2}$ everywhere is called homoscedasticity. This means that all conditional error distributions have the same variance. This assumption cannot be tested with a test for distinct groups (Fligner-Killeen, Levene). The following diagram tries to illustrate the idea of identical conditional error distributions (R-code here). Tests for heteroscedasticity are the Breusch-Pagan-Godfrey test (bptest() from package lmtest or ncvTest() from package car) or the White test (white.test() from package tseries). You can also consider just using heteroscedasticity-consistent standard errors (the modified White estimator; see function hccm() from package car or vcovHC() from package sandwich). These standard errors can then be used in combination with function coeftest() from package lmtest, as described on pages 184-186 of Fox & Weisberg (2011), An R Companion to Applied Regression. You could also just plot the empirical residuals (or some transform thereof) against the fitted values.
Typical transforms are the studentized residuals (spread-level plot) or the square root of the absolute residuals (scale-location plot). These plots should not reveal an obvious trend in the residual distribution that depends on the prediction.

    N     <- 100                                  # number of observations
    X     <- seq(from=75, to=140, length.out=N)   # predictor
    Y     <- 0.6*X + 10 + rnorm(N, 0, 10)         # DV
    fit   <- lm(Y ~ X)                            # regression
    E     <- residuals(fit)                       # raw residuals
    Estud <- rstudent(fit)                        # studentized residuals
    plot(fitted(fit), Estud, pch=20, ylab="studentized residuals",
         xlab="prediction", main="Spread-Level-Plot")
    abline(h=0, col="red", lwd=2)
    plot(fitted(fit), sqrt(abs(E)), pch=20, ylab="sqrt(|residuals|)",
         xlab="prediction", main="Scale-Location-Plot")
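As a language-neutral illustration of the idea behind bptest(), here is a minimal pure-NumPy version of the Breusch-Pagan test in its studentized (Koenker) form: regress the squared OLS residuals on the predictor and use $nR^2$ as the test statistic, to be compared against $\chi^2_1$. The deterministic alternating "noise" below is just a toy pattern that makes the homoscedastic/heteroscedastic contrast reproducible.

```python
import numpy as np

def bp_lm_stat(x, y):
    """LM statistic of the Breusch-Pagan test (Koenker form):
    n * R^2 from regressing the squared OLS residuals on the
    predictor. Compare against chi-squared with 1 df."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    e2 = (y - X @ beta) ** 2                      # squared residuals
    gamma, *_ = np.linalg.lstsq(X, e2, rcond=None)
    resid = e2 - X @ gamma
    r2 = 1 - resid @ resid / np.sum((e2 - e2.mean()) ** 2)
    return len(y) * r2

x = np.linspace(75, 140, 200)
signal = 0.6 * x + 10
wiggle = np.where(np.arange(200) % 2 == 0, 1.0, -1.0)  # deterministic "noise"

homo = bp_lm_stat(x, signal + wiggle)              # constant error size
hetero = bp_lm_stat(x, signal + wiggle * x / 75)   # error size grows with x
print(round(homo, 2), round(hetero, 2))
```

The statistic stays near zero when the error magnitude is constant and becomes very large when it grows with $x$, which is the pattern the packaged tests formalize with a p-value.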