26,801
Why is using cross-sectional data to infer / predict longitudinal changes a Bad Thing?
It sounds very much like the definition of a non-ergodic process (measures over realizations not being equal to measures over time). Sadly, very few interesting real-world phenomena are ergodic. I guess this could be a case for finer-scale sampling and inference, where certain simplifications might be carried out. I'm thinking, for example, of small time or spatial scales, where chaotic behaviour is not observed, so predictors can be linearized. But I'm just rambling here. I'm afraid I can't help you with specific literature on the topic, either. Sorry :/ But it's an interesting question nonetheless.
26,802
Degrees of freedom for a weighted average
This is wrong (as correctly pointed out by @zkurtz): I think the answer given by @zbicyclist as a comment above is quite sensible. One way to rationalize it is as follows. If you arrange your sample $x_1,\ldots,x_n$ and "regress" it on a vector whose $i$-th element is $w_i / \sum w_i^2$ (with no intercept), you get the estimate $\hat\beta = \sum_iw_ix_i$. The degrees of freedom of such a regression would be the trace of the matrix $X(X'X)^{-1}X'$; replacing $X$ by the column vector whose $i$-th element is as given above, those degrees of freedom turn out to be $(\sum_iw_i)^2/\sum_iw_i^2$. Of course, if the $w_i$ are constrained to add up to 1, this coincides with your guess of $1/\sum_iw_i^2$. As a reference lending support to such "degrees of freedom", Hastie & Tibshirani (1990), Generalized Additive Models, Chapman & Hall, section 3.5, is interesting. (They give alternatives to the trace of the "hat" matrix.)

This may be right: Hastie & Tibshirani (1990), cited above, propose alternative definitions of the "degrees of freedom used" by a general non-parametric smoother $\hat{\boldsymbol{x}} = S \boldsymbol{x}$ as follows: i) trace$(S)$, ii) trace$(S^TS)$, and iii) trace$(2S-S^TS)$. They draw on the analogy with a linear model, in which $S = X(X^TX)^{-1}X^T$, whose trace is $p$, the number of parameters (throughout I consider the full-rank case). Since $S = X(X^TX)^{-1}X^T$ is symmetric idempotent, $S^TS$ and $(2S-S^TS)$ are equal to $S$, so the three definitions give the same answer in the linear regression case. In the case of the question asked, we may consider $\boldsymbol{\hat{x}} = W \boldsymbol{x}$, where $\boldsymbol{\hat{x}}$ is the weighted mean multiplied by a column vector $\boldsymbol{1}$ and $W$ is a symmetric matrix of weights, each of whose rows is equal to the set of weights used. If we adopt definition i) above, the number of degrees of freedom used would be 1 (assuming $\sum_iw_i = 1$); if we adopt ii), it would be $n\sum_iw_i^2$. (In the case $w_i = 1/n$ for all $i$ (the ordinary average), this produces 1, as it should.) I find this definition to have some intuitive appeal, but I totally agree with @whuber that the name "degrees of freedom" (used in the smooth or fit) is an abuse of language. I do not believe there is a non-controversial definition. On this topic I have also found interesting Hodges, J. S. and Sargent, D. J. (2001), "Counting Degrees of Freedom in Hierarchical and Other Richly-Parameterised Models", Biometrika, vol. 88, pp. 367-379. There are many other papers dealing with counting "equivalent parameters" (or "degrees of freedom used") in different situations.
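The two trace definitions for the weighted mean are easy to verify numerically. This is my own Python sketch (the weights and helper functions are illustrative, not from the answer):

```python
# Numeric check of definitions i) trace(S) and ii) trace(S'S) for the
# weighted-mean smoother, in plain Python (no external packages).

def trace(m):
    return sum(m[i][i] for i in range(len(m)))

def matmul(a, b):
    return [[sum(a[i][t] * b[t][j] for t in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def transpose(m):
    return [list(row) for row in zip(*m)]

w = [0.5, 0.3, 0.2]              # weights summing to 1
n = len(w)
W = [list(w) for _ in range(n)]  # smoother matrix: every row equals w

df_i  = trace(W)                          # definition i):  trace(S)
df_ii = trace(matmul(transpose(W), W))    # definition ii): trace(S'S)

print(df_i)    # ~= sum(w_i) = 1
print(df_ii)   # ~= n * sum(w_i^2) = 3 * 0.38 = 1.14
```

With equal weights $w_i = 1/n$, both definitions return 1, matching the sanity check in the answer.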
26,803
Should I use Welch's (1947) approximate degrees of freedom or Satterthwaite's (1946)?
Welcome to CV! I cannot answer which one is preferred (they are actually really close, so I don't think it matters much), but generally, major statistical software packages use Satterthwaite's method. SPSS and SAS both use it. In Stata, some commands like ttest let the user specify Welch's method, but Satterthwaite's is still the default. And in the literature, I have mostly seen Satterthwaite's formula being cited. From time to time it's referred to as the Satterthwaite-Welch degrees of freedom, but the formula cited is Satterthwaite's. I guess having published it one year earlier did matter.
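For concreteness, the Welch-Satterthwaite approximation that both authors' names attach to comes down to one formula. A hand-rolled Python sketch (the numbers are hypothetical):

```python
def satterthwaite_df(s1_sq, n1, s2_sq, n2):
    """Welch-Satterthwaite approximate degrees of freedom for two samples
    with sample variances s1_sq, s2_sq and sizes n1, n2."""
    v1, v2 = s1_sq / n1, s2_sq / n2
    return (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))

# Equal variances and equal sizes recover the pooled df, n1 + n2 - 2:
print(satterthwaite_df(1.0, 10, 1.0, 10))   # ~= 18 = 10 + 10 - 2

# Unequal variances give a value between min(n1, n2) - 1 and n1 + n2 - 2:
print(satterthwaite_df(4.0, 10, 9.0, 15))   # ~= 23.0
```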
26,804
Metrics for covariance matrices: drawbacks and strengths
Well, I don't think there is a good metric or 'the best way' to analyze covariance matrices. The analysis should always be aligned with your goal. Let's say C is my covariance matrix. The diagonal contains the variance of each computed parameter, so if you're interested in parameter significance, then trace(C) is a good start, since it's your overall performance. If you list your parameters and their uncertainties, you can see something like this:

x1 =  1.0 ±  0.1
x2 = 10.0 ±  5.0
x3 =  5.0 ± 15.0   <-- non-significant parameter

If you're interested in their mutual correlation, then a table like this might yield something interesting:

     x1    x2    x3
x1   1.0
x2   0.9   1.0
x3  -0.3  -0.1   1.0

Each element is the correlation coefficient between parameters xi and xj. From the example it's visible that parameters x1 and x2 are highly correlated.
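Building that correlation table from C is a short computation worth sketching. This is my own plain-Python illustration; the covariance entries below are reverse-engineered from the example's standard deviations and correlations, so treat them as illustrative:

```python
import math

def cov_to_corr(C):
    """Turn a covariance matrix (list of lists) into a correlation matrix."""
    sd = [math.sqrt(C[i][i]) for i in range(len(C))]
    return [[C[i][j] / (sd[i] * sd[j]) for j in range(len(C))]
            for i in range(len(C))]

# sds 0.1, 5, 15 and correlations r12 = 0.9, r13 = -0.3, r23 = -0.1:
C = [[ 0.01,  0.45, -0.45],
     [ 0.45, 25.0,  -7.5 ],
     [-0.45, -7.5, 225.0 ]]

R = cov_to_corr(C)
print(R[0][1], R[0][2], R[1][2])   # ~= 0.9, -0.3, -0.1
```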
26,805
Metrics for covariance matrices: drawbacks and strengths
Interesting question, I'm grappling with the same issue at the moment! It depends on how you define 'best', i.e., are you looking for some average single value for the spread, or for the correlation between the data, etc. I found in Press, S.J. (1972): Applied Multivariate Analysis, p. 108, that the generalized variance, defined as the determinant of the covariance matrix, is useful as a single measure of spread. But if it's correlation that you are after, I will need to think further. Let me know.
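To make the generalized variance concrete, here is a minimal 2x2 sketch (my own Python example, not from Press): the determinant shrinks as correlation grows, reflecting that strongly correlated variables jointly "spread" less than independent ones with the same variances.

```python
def det2(C):
    """Generalized variance of a 2x2 covariance matrix: its determinant."""
    return C[0][0] * C[1][1] - C[0][1] * C[1][0]

# Variances 4 and 9 (sds 2 and 3); covariance = 0.8 * 2 * 3 when r = 0.8.
uncorrelated = [[4.0, 0.0], [0.0, 9.0]]
correlated   = [[4.0, 4.8], [4.8, 9.0]]

print(det2(uncorrelated))  # 36.0
print(det2(correlated))    # ~= 36 * (1 - 0.8**2) = 12.96
```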
26,806
How to write the error term in repeated measures ANOVA in R: Error(subject) vs Error(subject/time)
First, subject/time is notation for time nested in subject, and so it expands to two parts: subject and the subject:time interaction. So the question more properly becomes: when should one specify the subject:time interaction, and what difference does it make? Before answering this question, one other important thing to realize is that all models include one additional error term that need not be specified, namely the error term associated with the individual measurements (the lowest level, if you think about this hierarchically). In this case, the subject:time interaction is that lowest level, which is always included in the model. So using Error(subject) and Error(subject/time) gives the same result; the only difference is that in the output, that level of results is called Within for the first and subject:time for the second. However, in cases where there are multiple measurements at each subject/time combination, it is necessary to specify the subject:time interaction, as then that interaction is not at the lowest level.
26,807
Statistical Tests That Incorporate Measurement Uncertainty
It sounds like you want to conduct a weighted analysis. See the "Weighted Statistics Example" in the "Concepts" section of the SAS documentation.
26,808
Statistical Tests That Incorporate Measurement Uncertainty
Why not simulate it? That is, add in your uncertainty as realizations of noise to each observation. Then repeat the hypothesis test. Do this about 1000 times and see how many times the null was rejected. You will need to pick a distribution for the noise. The normal seems like one option, but it could produce negative observations, which is not realistic.
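The simulation idea above can be sketched in a few lines. This is my own Python illustration with pure standard library and hypothetical placeholder data (the original question's measurements and uncertainties are not shown here); the fixed critical value is a crude stand-in for a proper t quantile:

```python
import random
import statistics

random.seed(1)

# Hypothetical data: two groups of measurements, each observation with a
# known measurement uncertainty (standard deviation). Placeholder values.
g1  = [10.2, 11.1, 9.8, 10.5, 11.4]
g2  = [11.9, 12.4, 11.6, 12.8, 12.1]
sd1 = [0.3] * len(g1)
sd2 = [0.3] * len(g2)

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    va = statistics.variance(a) / len(a)
    vb = statistics.variance(b) / len(b)
    return (statistics.mean(a) - statistics.mean(b)) / (va + vb) ** 0.5

n_sims = 1000
rejections = 0
for _ in range(n_sims):
    # Perturb each observation by its measurement uncertainty and re-test.
    a = [x + random.gauss(0, s) for x, s in zip(g1, sd1)]
    b = [x + random.gauss(0, s) for x, s in zip(g2, sd2)]
    # Crude two-sided 5% cutoff (t ~ 2.31 for about 8 df); substitute the
    # proper quantile for the Welch df if scipy is available.
    if abs(welch_t(a, b)) > 2.31:
        rejections += 1

print(rejections / n_sims)  # share of noisy replicates still rejecting H0
```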
26,809
Statistical Tests That Incorporate Measurement Uncertainty
You could turn it into a regression problem and use the uncertainties as weights. That is, predict group (1 or 2?) from the measurement in a regression. But the uncertainties are approximately constant, so it seems likely that nothing much will change by using them, too. You have a mild outlier at 10.5, which is complicating matters by reducing the difference between means. But if you can believe the uncertainties, that value is no more suspect than any of the others. The t-test does not know that your alternative hypothesis is that the two samples are drawn from different populations. All it knows about is comparing means, under certain assumptions. Rank-based tests are an alternative, but if you are interested in these data as measurements, they don't sound preferable for your goals.
26,810
Statistical Tests That Incorporate Measurement Uncertainty
In ordinary least squares (e.g., lm(y ~ x)) you are allowing for variability (uncertainty) around the y values, given an x value. If you flip the regression around (lm(x ~ y)) you minimize the errors around x. In both cases, the errors are assumed to be fairly homogeneous. If you know the amount of variance around each observation of your response variable, and that variance is not constant when ordered by x, then you would want to use weighted least squares. You can weight the y values by factors of 1/(variance). In the case where you are concerned that both x and y have uncertainty, and that the uncertainty is not the same between the two, then you don't want to simply minimize residuals (address uncertainty) perpendicular to one of your axes. Ideally, you would minimize uncertainty perpendicular to the fitted trend line. To do this, you could use PCA regression (also known as orthogonal regression, or total least squares). There are R packages for PCA regression, and there have previously been posts on this topic on this web site, which have also been discussed elsewhere. Furthermore, I think (i.e., I may be wrong...) you can still do a weighted version of this regression, making use of your knowledge of the variances.
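The 1/(variance) weighting has a simple closed form for a straight-line fit. A pure-Python sketch of my own (for real work you would use R's lm with its weights argument); the data points are made up for illustration:

```python
def wls_fit(x, y, var):
    """Weighted least squares line y = a + b*x, weighting each point 1/var."""
    w = [1.0 / v for v in var]
    sw = sum(w)
    xbar = sum(wi * xi for wi, xi in zip(w, x)) / sw
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sw
    num = sum(wi * (xi - xbar) * (yi - ybar) for wi, xi, yi in zip(w, x, y))
    den = sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, x))
    b = num / den
    return ybar - b * xbar, b

# Exact line y = 1 + 2x with equal variances: recovers a = 1, b = 2.
a, b = wls_fit([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0], [1.0] * 4)

# A wildly uncertain third point barely moves the fit once down-weighted.
a2, b2 = wls_fit([0.0, 1.0, 2.0], [1.0, 3.0, 100.0], [1.0, 1.0, 1e6])
```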
26,811
How to interpret coefficients of a multivariate mixed model in lme4 without overall intercept?
Your idea is good, but in your example you forgot to model different intercepts and different random variances for each trait, so your output is not interpretable as is. A correct model would be:

m1 <- lmer(value ~ -1 + variable + variable:gear + variable:carb +
           (0 + variable | factor(carb)))

In that case, you would get the estimates of the fixed effects on each variable (for example, variabledrat:gear is the effect of the predictor gear on the response drat), but you would also get the intercepts for each variable (for example, variabledrat for the intercept of the response drat), the random variance of each variable, and the correlations between variables:

 Groups       Name         Std.Dev. Corr
 factor(carb) variabledrat 23.80
              variablempg  24.27    0.20
              variablehp   23.80    0.00 0.00
 Residual                  23.80

A more detailed description of these methods has been written down by Ben Bolker, as has the use of MCMCglmm in a Bayesian framework. Another new package, mcglm, can also handle multivariate models, even with non-normal responses, but you have to code your random design matrices. A tutorial should be available soon (see the R help page).
26,812
How to fix a coefficient in an ordinal logistic regression without proportional odds assumption in R?
I'm not sure I understand what the OP means by "I can't use offset because it completely removes the corresponding regressor from the regression." You can fix a parameter using the offset() function in R. I'm using lm() below, but it should work in your model as well.

dat <- data.frame(x = rnorm(30))
dat$y <- dat$x * 2 + rnorm(30)
free   <- lm(y ~ x, dat)
fixed1 <- lm(y ~ offset(2 * x), dat)
summary(free)
#Coefficients:
#            Estimate Std. Error t value Pr(>|t|)
#(Intercept)  0.03899    0.17345   0.225    0.824
#x            2.17532    0.18492  11.764 2.38e-12 ***
summary(fixed1)
#Coefficients:
#            Estimate Std. Error t value Pr(>|t|)
#(Intercept)  0.05043    0.17273   0.292    0.772

The fixed parameter doesn't show up in the output, but it's still fixed at 2. Next I'll fix the x parameter to its estimated value in the free model:

fixed2 <- lm(y ~ offset(2.17532 * x), dat)
summary(fixed2)
#Coefficients:
#            Estimate Std. Error t value Pr(>|t|)
#(Intercept)  0.03899    0.17002   0.229     0.82

Notice that the intercept in fixed2 is estimated at the same value as in the free model.
26,813
Modeling a spatial trend by regression with the $(x,y)$ coordinates as predictors
I think you might be better off fitting a linear mixed effects model with spatially-correlated random effects (sometimes called a geostatistical model). Assuming your data are Gaussian, you specify a model of the form $ Y_i = \mu_i + S_i + \epsilon_i $ for $n$ observations $1 \leq i \leq n$, with $\epsilon_i \sim N(0,\tau^2)$ representing iid errors and $\mathbb{S} \sim MVN(\mathbb{0},\sigma^2 R)$ representing your spatial terms (where $\mathbb{S} = \{S_1,...,S_n\}$). The mean $\mu_i$ could be a function of other covariates (i.e. $\mu_i = \beta_0 + \beta_1 x_{i1} + \beta_2 x_{i2}$, etc.) or it could just be a constant (it might be best to start with the latter for simplicity). The correlation matrix $R$ for the spatial terms (which determines how correlated you think each pair of observations should be) can be specified by looking at the empirical variogram. Generally, the correlation between observations is chosen to depend only on the distance between them (this is where your coordinates come into the model). Chapter 2 of Model-based Geostatistics by Diggle and Ribeiro (2000) should give you a more detailed introduction. The R package geoR has many procedures for fitting geostatistical models, so you may find it useful (see http://cran.r-project.org/web/packages/geoR/geoR.pdf).
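To show how the coordinates enter the model only through distances, here is a sketch of building $R$ from coordinates. The exponential correlation function and the range parameter phi are assumptions of mine for illustration; in practice the empirical variogram suggests both:

```python
import math

def exp_corr_matrix(coords, phi):
    """Correlation matrix R with R_ij = exp(-d_ij / phi), where d_ij is the
    Euclidean distance between coordinates i and j."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    return [[math.exp(-dist(p, q) / phi) for q in coords] for p in coords]

coords = [(0.0, 0.0), (1.0, 0.0), (5.0, 5.0)]
R = exp_corr_matrix(coords, phi=2.0)   # phi: assumed range parameter
# R has ones on the diagonal, and entries decay toward 0 with distance.
```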
26,814
What is a valid post-hoc analysis for a three-way repeated measures ANOVA?
I think statisticians will tell you that there is always a problem with any post hoc analysis, because seeing the data may influence what you look at and you could be biased because you are hunting for significant results. The FDA in clinical trial studies requires that the statistical plan be completely spelled out in the protocol. In a linear model you certainly could prespecify the contrasts that you would like to look at in the event that the ANOVA or ANCOVA finds an overall difference. Such prespecified contrasts would be fine to look at as long as the usual adjustment for multiplicity is also part of the plan.
26,815
What is a valid post-hoc analysis for a three-way repeated measures ANOVA?
If you have a software package like SAS, you would probably use PROC MIXED to do the repeated measures mixed model, and if you specify which contrast you want to use SAS will handle it properly for you. You may also be able to do it with the REPEATED option in PROC GLM, but be careful because they behave differently and make different assumptions. The repeated observations are usually correlated because they have something in common; I often have repeated measures on the same patient at different time points. So in computing the contrasts the covariance terms enter into the problem.
26,816
Good books covering data preprocessing and outlier detection techniques
Although specific to Stata, I've found Scott Long's book, The Workflow of Data Analysis Using Stata, invaluable in the area of data management and preparation. The author gives a lot of helpful advice regarding good practices in data management, such as cleaning and archiving data, checking for outliers and dealing with missing data.
26,817
Good books covering data preprocessing and outlier detection techniques
For SAS, there is Ron Cody's Data Cleaning Techniques using SAS Software. There is a saying on SAS-L: "You can never go wrong with a book by Ron Cody"
26,818
Good books covering data preprocessing and outlier detection techniques
If you have the basics (identifying outliers, missing values, weighting, coding), then depending on the topic there's a lot more to be found in the plain academic literature. For example in survey research (a topic where many things can go wrong, and which is prone to many sources of bias) there are a lot of good articles to be found. When preparing for regular cross-sectional regression, things may be less complex. A problem there, for example, may be that you remove too many 'outliers' and thus artificially make your model fit well. So besides learning good techniques, I also recommend keeping common sense in mind: make sure you apply the techniques rightfully and not blindly. As for the software discussion in the other answers: I think SPSS is not bad for data preparation (I have also heard good things about SAS), depending on your dataset size, and the drop-down menus are very intuitive. But as a direct answer to your question: the academic literature may or may not be a very good source for your data preparation, depending on the topic and analysis.
26,819
How many of the biggest terms in $\sum_{i=1}^N |X_i|$ add up to half the total?
No, there isn't a general asymptotic result. Let $x_{[1]} \dots x_{[N]}$ be the ordered $x_i$, where $x_{[1]}$ is the largest. Consider the following two examples: 1) $P(x=0) = 1$. Clearly the CLT holds. You only need $M=1$ observation for $\sum_{j=1}^M|x_{[j]}| \ge \frac{1}{2} \sum_N|x_i|$. 2) $P(x=1) = 1$. Clearly the CLT holds. You need $M=\lceil N/2\rceil$ observations for $\sum_{j=1}^M|x_{[j]}| \ge \frac{1}{2} \sum_N|x_i|$. For a nontrivial example, the Bernoulli distribution: 3) $P(x=1) = p,\space P(x=0) = 1-p$. Once again the CLT holds. You need $\lceil pN/2\rceil $ of the observations to meet your conditions. By varying $p$ between 0 and 1, you can get as close to example 1 or example 2 as you like.
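A quick simulation (Python sketch; the values of $p$ and $N$ are arbitrary) confirms the Bernoulli case: the number of largest observations needed to reach half the total tracks $\lceil pN/2\rceil$:

```python
import math
import numpy as np

rng = np.random.default_rng(1)
p, N = 0.3, 10_000
x = rng.binomial(1, p, N)

# Smallest M such that the M largest observations sum to at least half the total.
xs = np.sort(x)[::-1]
M = int(np.searchsorted(np.cumsum(xs), xs.sum() / 2) + 1)

predicted = math.ceil(p * N / 2)   # the ceil(pN/2) count from example 3
```

With 0/1 values the sum is just the count of ones, so $M$ equals half that count (up to rounding), matching the prediction up to sampling noise in the number of ones.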
26,820
How many of the biggest terms in $\sum_{i=1}^N |X_i|$ add up to half the total?
Here is a crude argument giving a slightly different estimate for uniformly distributed random variables. Suppose the $X_i$ are continuous random variables uniformly distributed on $[0,1]$. Then, $\sum_i X_i$ has mean value $N/2$. Assume that by a surprising and totally unbelievable coincidence, the sum is exactly equal to $N/2$. So we want to estimate how many of the largest values of $X$ sum up to $N/4$ or more. Now, the histogram of $N$ samples ($N$ very large) drawn from the uniform distribution $U[0,1]$ is roughly flat from $0$ to $1$, and so for any $x$, $0 < x < 1$, there are $(1-x)N$ samples distributed roughly uniformly between $x$ and $1$. These samples have average value $(1+x)/2$ and sum equal to $(1-x)N(1+x)/2 = (1-x^2)N/2$. The sum exceeds $N/4$ for $x \leq 1/\sqrt{2}$. So, the sum of the $(1-1/\sqrt{2})N \approx 0.3N$ largest samples exceeds $N/4$. You could try to generalize this a bit. If $\sum_i X_i = Y$, then for any given $Y$, we want $x$ to be such that $(1-x^2)N/2 = Y/2$, where $Y$ is normal with mean $N/2$ and variance $N/12$. Thus, conditioned on a value of $Y$, $x = \sqrt{1-(Y/N)}$. Multiply by the density of $Y$ and integrate (from $Y=0$ to $Y=N$) to find the average number of largest samples that will exceed half the random sum.
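The $1-1/\sqrt{2} \approx 0.293$ fraction is easy to check by simulation (Python sketch; sample size chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 200_000
x = rng.uniform(0, 1, N)

# Smallest M such that the M largest samples sum to at least half the total.
xs = np.sort(x)[::-1]
M = int(np.searchsorted(np.cumsum(xs), xs.sum() / 2) + 1)
frac = M / N   # should be close to 1 - 1/sqrt(2)
```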
26,821
How many of the biggest terms in $\sum_{i=1}^N |X_i|$ add up to half the total?
Let's assume $X$ has just positive values to get rid of the absolute value. Without an exact proof, I think you have to solve for $k$ $$(1-F_{X}(k))\,E(X \mid X \geq k) = \frac{1}{2} E(X)$$ with $F$ being the cumulative distribution function for $X$, and then the answer is given by taking the $n(1-F_X(k))$ highest values. My logic is that asymptotically the sum of all values higher than $k$ should be about $n(1-F_{X}(k))E(X \mid X \geq k)$, and asymptotically half the total sum is about $\frac{1}{2}nE(X)$. Numerical simulations show that the result holds for the uniform case (uniform on $[0,1]$) where $F(k)=k$, and I get $k=\sqrt{\frac{1}{2}}$. I am not certain if the result always holds or if it can be simplified further, but I think it really depends on the distribution function $F$.
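For the uniform case this equation can be solved numerically as a sanity check (Python sketch; plain bisection, no root-finding library). For $U[0,1]$ the left side is $(1-k)(1+k)/2 = (1-k^2)/2$ and the right side is $1/4$, so the root should be $k = 1/\sqrt{2}$:

```python
# Solve (1 - F(k)) * E[X | X >= k] = E[X] / 2 for Uniform[0,1]:
# left side = (1 - k) * (1 + k) / 2 = (1 - k^2) / 2, right side = 1/4.
def g(k):
    return (1 - k ** 2) / 2 - 0.25

lo, hi = 0.0, 1.0          # g(0) > 0 > g(1), and g is decreasing on [0, 1]
for _ in range(60):
    mid = (lo + hi) / 2
    if g(mid) > 0:
        lo = mid
    else:
        hi = mid
k = (lo + hi) / 2          # converges to 1/sqrt(2)
```

The implied fraction of samples is then $1-F(k) = 1-1/\sqrt{2} \approx 0.293$, agreeing with the other answer's estimate.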
26,822
Log-likelihood ratio in document summarization
With my limited knowledge, I think: "the probability of observing w in input" requires a distribution in order to compute the value; "the probability of observing w in both the input and in the background corpus assuming equal probabilities in both corpora" means "the likelihood of observing w ... given that the probability for w is equal in both corpora". Here's my formulation for it. Formulating the problem a little:

Hypothesis 1: P(w in input) = P(w in background) = p
Hypothesis 2: P(w in input) = p1 and P(w in background) = p2 and p1 $\ne$ p2

The critical part is that you will need to assume a distribution here. Simplistically, we assume a binomial distribution for generating w in a text. Given the sample data, we can use maximum likelihood estimation to compute the values of p, p1, and p2, and here they are:

p = (count-of-w-in-input + count-of-w-in-background) / (input-size + background-size) = (c1 + c2) / (N1 + N2)
p1 = c1 / N1
p2 = c2 / N2

We want to know which hypothesis is more likely, so we compute the likelihood of each hypothesis and compare them (which is basically what the likelihood ratio does). Since we assume a binomial distribution, we can compute the likelihood of having c1 and c2. For Hypothesis 1: L(c1) = the probability of observing w in the input = the likelihood of achieving c1 when there are N1 words assuming the probability p (or, in other words, selecting w for c1 times out of N1 times), which is b(N1, c1, p) -- see the binomial probability formula. L(c2) = the probability of observing w in the background = the likelihood of achieving c2 when there are N2 words assuming the probability p, which is b(N2, c2, p). For Hypothesis 2, we can use p1 and p2 instead. Now we want to know which hypothesis is more likely; we will need to somehow compare an output value from each hypothesis. But each hypothesis has 2 values, L(c1) and L(c2). How can we compare which hypothesis is more likely? We choose to multiply them together to achieve a single-valued output (because it's analogous to geometry, I guess).
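The two-binomial likelihood ratio described above can be written out directly (Python sketch; the binomial coefficients are identical under both hypotheses, so they cancel in the ratio and only the $q^c(1-q)^{N-c}$ parts are kept):

```python
import math

def log_lr(c1, N1, c2, N2):
    # p, p1, p2 are the maximum likelihood estimates described above.
    p = (c1 + c2) / (N1 + N2)
    p1, p2 = c1 / N1, c2 / N2

    def ll(c, N, q):
        # Log binomial likelihood, dropping the binomial coefficient
        # (identical in both hypotheses, so it cancels in the ratio).
        out = 0.0
        if c > 0:
            out += c * math.log(q)
        if N - c > 0:
            out += (N - c) * math.log(1 - q)
        return out

    # log lambda = log L(H1) - log L(H2); always <= 0 since H2 contains H1.
    return ll(c1, N1, p) + ll(c2, N2, p) - ll(c1, N1, p1) - ll(c2, N2, p2)
```

`-2 * log_lr(...)` is the usual form of the statistic, approximately chi-squared with 1 degree of freedom under Hypothesis 1.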
26,823
Log-likelihood ratio in document summarization
I would like to give an example and use the definition from the question. Suppose a word w appears once in a 30-word document d:

C(d) = 1
N(d) = 30
// the probability of w in the input: p(d) = 1/30

Suppose the background corpus has 4000 words, in which w appears 20 times:

C(b) = 20
N(b) = 4000
// the probability of w in the corpus: p(b) = 20/4000 = 1/200

The LLR for a word, generally called lambda(w), is the ratio between the probability of observing w in both the input and in the background corpus assuming equal probabilities in both corpora, and the probability of observing w in both assuming different probabilities for w in the input and the background corpus. The probability of observing w in both the input and in the background corpus assuming equal probabilities in both corpora:

p = [C(d)+C(b)]/[N(d)+N(b)] = (1+20)/(30+4000)

The calculation:
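These numbers can be plugged into a two-binomial likelihood ratio (Python sketch; this assumes a binomial model for word occurrence, with the binomial coefficients cancelling in the ratio, and uses $-2\log\lambda$ as the usual form of the statistic):

```python
import math

# Counts from the example above.
c1, N1 = 1, 30       # w in the 30-word input document
c2, N2 = 20, 4000    # w in the 4000-word background corpus

p = (c1 + c2) / (N1 + N2)     # pooled estimate under "equal probabilities"
p1, p2 = c1 / N1, c2 / N2     # separate estimates under "different probabilities"

def ll(c, N, q):
    # log of q^c * (1-q)^(N-c); the binomial coefficients cancel in the ratio
    return c * math.log(q) + (N - c) * math.log(1 - q)

log_lambda = ll(c1, N1, p) + ll(c2, N2, p) - ll(c1, N1, p1) - ll(c2, N2, p2)
stat = -2 * log_lambda        # the usual -2 log lambda test statistic
```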
26,824
When to transform predictor variables when doing multiple regression?
I take your question to be: how do you detect when the conditions that make transformations appropriate exist, rather than what the logical conditions are. It's always nice to bookend data analyses with exploration, especially graphical data exploration. (Various tests can be conducted, but I'll focus on graphical EDA here.) Kernel density plots are better than histograms for an initial overview of each variable's univariate distribution. With multiple variables, a scatterplot matrix can be handy. Lowess is also always advisable at the start. This will give you a quick and dirty look at whether the relationships are approximately linear. John Fox's car package usefully combines these: library(car) scatterplot.matrix(data) Be sure to have your variables as columns. If you have many variables, the individual plots can be small. Maximize the plot window and the scatterplots should be big enough to pick out the plots you want to examine individually, and then make single plots. E.g., windows() plot(density(X[,3])) rug(X[,3]) windows() plot(X[,3], y) lines(lowess(y~X[,3])) After fitting a multiple regression model, you should still plot and check your data, just as with simple linear regression. QQ plots for residuals are just as necessary, and you could do a scatterplot matrix of your residuals against your predictors, following a similar procedure as before. windows() qq.plot(model$residuals) windows() scatterplot.matrix(cbind(model$residuals,X)) If anything looks suspicious, plot it individually and add abline(h=0) as a visual guide. If you have an interaction, you can create an X[,1]*X[,2] variable and examine the residuals against that. Likewise, you can make a scatterplot of residuals vs. X[,3]^2, etc. Other types of plots than residuals vs. x that you like can be done similarly. Bear in mind that these are all ignoring the other x dimensions that aren't being plotted. If your data are grouped (i.e. from an experiment), you can make partial plots instead of / in addition to marginal plots. Hope that helps.
26,825
Handling large data sets in R -- tutorials, best practices, etc
Here are a couple of blog posts I did on this subject of Large Data Sets with R. There are a couple of packages like ff and bigmemory that make use of file swapping and memory allocation. A couple of other packages make use of connectivity to databases such as sqldf, RMySQL, and RSQLite. R References for Handling Big Data Big Data Logistic Regression in R with ODBC
26,826
Reference for the sum and difference of highly correlated variables being almost uncorrelated
I would refer to Seber GAF (1977) Linear regression analysis. Wiley, New York. Theorem 1.4. This says $\text{cov}(AX, BY) = A \text{cov}(X,Y) B'$. Take $A$ = (1 1) and $B$ = (1 -1) and $X$ = $Y$ = the vector with your X and Y. Note that, to have $\text{cov}(X+Y, X-Y) \approx 0$, it's critical that X and Y have similar variances. If $\text{var}(X) \gg \text{var}(Y)$, $\text{cov}(X+Y, X-Y)$ will be large.
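A quick check of both claims (Python sketch with simulated normals; the theoretical values follow from $\text{cov}(X+Y, X-Y) = \text{var}(X) - \text{var}(Y)$, which the theorem gives with the $A$ and $B$ above):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000

# Highly correlated X and Y with equal variances: sum and difference ~ uncorrelated.
X = rng.standard_normal(n)
Y = 0.9 * X + np.sqrt(1 - 0.9 ** 2) * rng.standard_normal(n)   # var(Y) = 1 as well
c_equal = np.cov(X + Y, X - Y)[0, 1]       # theory: var(X) - var(Y) = 0

# Perfectly correlated but with var(Z) = 4: the covariance is large.
Z = 2.0 * X
c_unequal = np.cov(X + Z, X - Z)[0, 1]     # theory: var(X) - var(Z) = 1 - 4 = -3
```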
26,827
How to make a matrix positive definite?
OK, since you're doing FA I'm assuming that $B$ is of full column rank $q$ and $q<p$. We need a few more details, though. This may be a numerical problem; it may also be a problem with your data. How are you computing the inverse? Do you need the inverse explicitly, or can you re-express the calculation as the solution to a linear system? (i.e. to get $A^{-1}b$, solve $Ax=b$ for $x$, which is typically faster and more stable.) What is happening to $D$? Are the estimates really small/0/negative? In some sense it is the critical link, because $BB'$ is of course rank deficient and defines a singular covariance matrix before adding $D$, so you can't invert it. Adding the positive diagonal matrix $D$ technically makes it full rank, but $BB'+D$ could still be horribly ill conditioned if $D$ is small. Oftentimes the estimates of the idiosyncratic variances (your $\sigma^2_i$, the diagonal elements of $D$) are near zero or even negative; these are called Heywood cases. See e.g. http://www.technion.ac.il/docs/sas/stat/chap26/sect21.htm (any FA text should discuss this as well; it's a very old and well-known problem). This can result from model misspecification, outliers, bad luck, solar flares... the MLE is particularly prone to this problem, so if your EM algorithm is designed to get the MLE, look out. If your EM algorithm is approaching a mode with such estimates, it's possible for $BB'+D$ to lose its positive definiteness, I think. There are various solutions; personally I'd prefer a Bayesian approach, but even then you need to be careful with your priors (improper priors, or even proper priors with too much mass near 0, can have the same problem for basically the same reason).
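A small numerical illustration of both points (Python sketch; $B$, $D$ and the dimensions are made up): when the uniquenesses in $D$ collapse toward zero, $BB'+D$ becomes numerically singular, and even in the healthy case solving $\Sigma x = b$ is preferable to forming $\Sigma^{-1}$ explicitly.

```python
import numpy as np

rng = np.random.default_rng(4)
p, q = 10, 2

# Hypothetical factor model covariance Sigma = B B' + D.
B = rng.standard_normal((p, q))
Sigma_bad = B @ B.T + np.diag(np.full(p, 1e-8))  # near-Heywood: tiny uniquenesses
Sigma_ok = B @ B.T + np.diag(np.full(p, 0.5))    # healthy uniquenesses

cond_bad = np.linalg.cond(Sigma_bad)   # enormous: B B' alone has rank q < p
cond_ok = np.linalg.cond(Sigma_ok)     # modest

# Prefer solving Sigma x = b over computing an explicit inverse.
b = rng.standard_normal(p)
x = np.linalg.solve(Sigma_ok, b)
```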
26,828
When/why not to use studentized residuals for regression diagnostics?
$H_{ii}$ is small for large $n$

The magnitude of the diagonal of the hat matrix $H$ decreases quickly with the number of observations and scales as $1/n$. If we have a matrix $X$ whose columns are perpendicular, then
$$H_{ii} = \frac{X_{i1}^2}{\sum_{j=1}^n X_{j1}^2} + \frac{X_{i2}^2}{\sum_{j=1}^n X_{j2}^2} + \dots + \frac{X_{ip}^2}{\sum_{j=1}^n X_{jp}^2}$$
The mean of the diagonal will be equal to $p/n$*. So the size of the inhomogeneities due to the contribution of the diagonal of the hat matrix $H_{ii}$ is of the order of $\sim p/n$.

Diagnostic plots often have large $n$

For a diagnostic plot one often has a large $n$ (because few points do not really show much of a pattern), and then the contribution of $H_{ii}$ to the variance of $e_i$ will be small and the variance of the $e_i$ will be relatively homogeneous. Or at least the $H_{ii}$ won't contribute much to inhomogeneity. The effect of $H_{ii}$ is negligible, and the reason not to use studentized residuals is simplicity.

*The trace of the projection matrix equals the rank of $X$, and since the rank is often the number of columns $p$ we have
$$\sum_{i=1}^n H_{ii} = p$$
which means that the average $H_{ii}$ is equal to $p/n$.
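The trace argument can be checked directly (a NumPy sketch with a hypothetical random design matrix): the leverages $H_{ii}$ average exactly $p/n$, and individually stay small for large $n$.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 4
X = rng.normal(size=(n, p))               # hypothetical design matrix

H = X @ np.linalg.solve(X.T @ X, X.T)     # hat matrix H = X (X'X)^{-1} X'
h = np.diag(H)

print(h.mean(), p / n)                    # mean leverage = trace(H)/n = p/n
print(h.max())                            # individual leverages are O(1/n) here
```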
26,829
A ''significant variable'' that does not improve out-of-sample predictions - how to interpret?
That a particular predictor is statistically significant doesn't really mean that it also considerably improves the predictive performance of a model. Predictive performance is more related to the effect size. As an example, the function below simulates data from a linear regression model with two predictors x1 and x2, and fits two models, one with both x1 and x2, and one with x1 alone. In the function you can change the effect size for x2. The function reports the confidence intervals for the coefficients of x1 and x2, and the $R^2$ values of the two models as a measure of predictive performance. The function is:

    sim_ES <- function (effect_size = 1, sd = 2, n = 200) {
        # simulate some data
        DF <- data.frame(x1 = runif(n, -3, 3), x2 = runif(n, -3, 3))
        DF$y <- 2 + 5 * DF$x1 + (effect_size * sd) * DF$x2 + rnorm(n, sd = sd)
        # fit the models with and without x2
        fm1 <- lm(y ~ x1 + x2, data = DF)
        fm2 <- lm(y ~ x1, data = DF)
        # results
        list("95% CIs" = confint(fm1),
             "R2_X1_X2" = summary(fm1)$r.squared,
             "R2_only_X1" = summary(fm2)$r.squared)
    }

As an example, for the default values we get:

    $`95% CIs`
                   2.5 %   97.5 %
    (Intercept) 1.769235 2.349051
    x1          4.857439 5.196503
    x2          1.759917 2.094877

    $R2_X1_X2
    [1] 0.9512757

    $R2_only_X1
    [1] 0.8238826

So x2 is significant, and not including it in the model has a big impact on the $R^2$. But if we set the effect size to 0.3, we get:

    > sim_ES(effect_size = 0.3)
    $`95% CIs`
                    2.5 %    97.5 %
    (Intercept) 1.9888073 2.5563233
    x1          4.9383698 5.2547929
    x2          0.3512024 0.6717464

    $R2_X1_X2
    [1] 0.9542341

    $R2_only_X1
    [1] 0.9450327

The coefficient is still significant but the improvement in the $R^2$ is very small.
26,830
A ''significant variable'' that does not improve out-of-sample predictions - how to interpret?
This is a fairly normal thing to happen in multiple regression. The most common reason is that your predictors are related to one another. In other words, you can infer X from the values of the other predictors. Therefore, while it's useful for predictions if it's the only predictor you have, once you have all the other predictors it doesn't provide much extra information. You can check whether this is the case by regressing X on the other predictors. I would also refer to the chapter on linear regression in the free online textbook, Elements of Statistical Learning.
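A minimal NumPy sketch of this check, with made-up data in which the third predictor is nearly a linear combination of the other two: regress it on the others and look at the $R^2$.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
z = rng.normal(size=(n, 2))
# x3 is almost a linear combination of x1 and x2, plus a little noise
X = np.column_stack([z, z @ np.array([1.0, -0.5]) + 0.1 * rng.normal(size=n)])

# R^2 of regressing x3 on the other predictors (with an intercept)
design = np.column_stack([np.ones(n), X[:, :2]])
x3 = X[:, 2]
beta, *_ = np.linalg.lstsq(design, x3, rcond=None)
resid = x3 - design @ beta
r2 = 1 - resid.var() / x3.var()
print(round(r2, 3))   # close to 1: x3 carries little extra information
```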
26,831
Why does this example show that stock picking requires no skill?
Daniel Kahneman is writing about consistency. If you went to a casino and came back with a large amount of money, you would be lucky. If you went to a casino on another day and lost a large amount of money, you would be unlucky. However, if you went to a casino a number of days in a row and won a pretty large amount of money each time, then either something unlikely would have happened, or you would be a skilled player. If something is about skill, it should be consistent over time (you are either good or bad; if it changes, it changes gradually rather than dramatically, so it is auto-correlated). If something depends not on skill but on luck, it can change dramatically and wouldn't be auto-correlated. As for your argument about golfers, you'd need to prove it with data for it to be valid; otherwise it is a bold claim. Nonetheless, many things in sports depend on luck rather than skill. On the other hand, I see your point that there is no comparison group of people who knew nothing about finance who would be monitored over time in terms of their investment successes.
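The consistency idea can be illustrated with a small simulation (invented numbers, plain NumPy): in a pure-luck world, year-to-year returns are essentially uncorrelated across managers, while a persistent per-manager skill component induces auto-correlation.

```python
import numpy as np

rng = np.random.default_rng(3)
n_managers, n_years = 1000, 8

# Pure-luck world: every manager's yearly return is independent noise
luck = rng.normal(size=(n_managers, n_years))
# Skill world: a persistent per-manager component plus yearly noise
skill = rng.normal(size=(n_managers, 1)) + rng.normal(size=(n_managers, n_years))

def mean_year_to_year_corr(R):
    # average correlation between consecutive years, across managers
    return np.mean([np.corrcoef(R[:, t], R[:, t + 1])[0, 1]
                    for t in range(R.shape[1] - 1)])

print(round(mean_year_to_year_corr(luck), 2))   # near 0
print(round(mean_year_to_year_corr(skill), 2))  # clearly positive
```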
26,832
Why does this example show that stock picking requires no skill?
This is not the best way to do it. Fund managers will do better in different market conditions, etc. For each fund manager, you would just perform a t-test on the 8 years of returns (adjusted for risk, so you're only getting the portion of returns caused by stock-selection ability) and test whether the mean is statistically different from 0. If it's not, you have no evidence of skill. The power of his 'correlation method' will be very small since only 8 years of returns are provided.
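A sketch of this test, and of the power problem, using SciPy (the return figures are invented): with only 8 observations, even a genuinely skilled manager is rarely detected.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# 8 years of hypothetical risk-adjusted returns for one "skilled" manager:
# true mean 1% per year, sd 5% (all numbers made up for illustration)
returns = rng.normal(loc=0.01, scale=0.05, size=8)
tstat, pval = stats.ttest_1samp(returns, popmean=0.0)
print(f"t = {tstat:.2f}, p = {pval:.3f}")

# How often would the test detect this much skill? Simulate many managers.
n_sim = 2000
rejections = sum(stats.ttest_1samp(rng.normal(0.01, 0.05, size=8), 0.0).pvalue < 0.05
                 for _ in range(n_sim))
power_hat = rejections / n_sim
print(f"estimated power: {power_hat:.2f}")   # low with only 8 observations
```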
26,833
Why maximum likelihood estimation use the product of pdfs rather than cdfs
How can a CDF be used to rank two possible parametrizations for a model? It is a cumulative probability, so it can only tell us the probability of obtaining such a result or a lower value given a probability model. If we took $\theta$ to predict the smallest possible outcomes, the CDF is nearly 1 at every observation and this would be the most "likely" in the sense that "yup, if the mean height were truly -99 I am very confident that repeating my sample would produce values smaller than the ones I observed". We could balance the left cumulative probability with the right cumulative probability. Consider the converse in our calculation: a median unbiased estimator satisfies: $$P(X < \theta) = P(X > \theta)$$ Here the best value of $\theta$ is the one for which $X$ is equally likely to be greater or less than its predicted value (assuming $\theta$ is a mean here). But that certainly doesn't correspond with our idea of being able to rank alternate parametrizations as more likely for a particular sample. Perhaps, on the other hand, you wanted the model to make a small interval around the observed value very probable, that is, to maximize (for a fresh draw $Y$ from the model) the probability per unit length: $$P(X - d < Y < X + d)/(2d) = \left(F(X+d) - F(X-d)\right)/(2d)$$ But how big should $d$ be? Well, if $d$ is taken to be arbitrarily small: $$\lim_{d \rightarrow 0} \left(F(X+d) - F(X-d)\right)/(2d) = f(X)$$ And you get the density. It is the instantaneous probability function that best characterizes the likelihood of a specific observation under a parametrization.
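Numerically (SciPy's standard normal, an arbitrary observed value $x$), the symmetric difference quotient of the CDF converges to the density as the interval shrinks:

```python
from scipy.stats import norm

x = 1.3                      # an arbitrary observed value
for d in (1.0, 0.1, 0.001):
    # probability per unit length of a small interval around x
    approx = (norm.cdf(x + d) - norm.cdf(x - d)) / (2 * d)
    print(f"d = {d}: {approx:.6f}")
print(f"density: {norm.pdf(x):.6f}")   # the limit as d -> 0
```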
26,834
Why maximum likelihood estimation use the product of pdfs rather than cdfs
You have an empirical dataset and want to find the best-fitting parameters of a hypothetical distribution. Say your empirical distribution is Gaussian with mean 50, sd 10. Let's let the algorithm make a guess... mean 0, sd 1. Your real points will be far in the tail of this guess, but we can summarize it by multiplying all the probabilities of your values based on an assumption of mean 0, sd 1. Actually, instead of multiplying, let's sum the logs, since that's more manageable. Also, since our algorithm likes to minimize, not maximize, we'll flip the sign, so you end up with the negative log-likelihood. It turns out that when you make a good guess for mean and sd, the negative log-likelihood will be smaller than for a bad guess. Rinse and repeat until the change in the negative log-likelihood is small enough, and there's your fit. The CDF doesn't natively lend itself to this kind of objective function. Multiplying out the product of the PDF (or summing the log) tells you, quite literally, the likelihood of your data under the hypothesis of a particular parameter set.
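This recipe can be sketched in a few lines (SciPy's general-purpose optimizer standing in for "the algorithm"; the log-sd parametrization is my own choice to keep sd positive):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(5)
data = rng.normal(loc=50, scale=10, size=2000)   # the "empirical" sample

def neg_log_lik(params):
    mu, log_sd = params                  # optimize log(sd) so sd stays positive
    return -np.sum(norm.logpdf(data, loc=mu, scale=np.exp(log_sd)))

# Start from the deliberately bad guess: mean 0, sd 1 (log sd = 0)
res = minimize(neg_log_lik, x0=[0.0, 0.0])
mu_hat, sd_hat = res.x[0], np.exp(res.x[1])
print(round(mu_hat, 1), round(sd_hat, 1))        # should land near 50 and 10
```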
26,835
PCA is to CCA as ICA is to?
The first step of ICA is to use PCA and project the dataset into a low-dimensional latent space. The second step is to perform a change of coordinates within the latent space, which is chosen to optimize a measure of non-gaussianity. This tends to lead to coefficients and loadings that are, if not sparse, then at least concentrated within small numbers of observations and features, and that way it facilitates interpretation. Likewise, in this paper on CCA+ICA (Sui et al., "A CCA+ICA based model for multi-task brain imaging data fusion and its application to schizophrenia"), the first (see footnote) step is to perform CCA, which yields a projection of each dataset into a low-dimensional space. If the input datasets are $X_1$ and $X_2$, each with $N$ rows=observations, then CCA yields $Z_1 = X_1W_1$ and $Z_2 = X_2W_2$, where the $Z$'s also have $N$ rows=observations. Note that the $Z$'s have a small number of columns, paired between $Z_1$ and $Z_2$, as opposed to the $X$'s, which may not even have the same number of columns. The authors then apply the same coordinate-changing strategy as is used in ICA, but they apply it to the concatenated matrix $[Z_1 | Z_2]$. Footnote: the authors also use preprocessing steps involving PCA, which I ignore here. They are part of the paper's domain-specific analysis choices, rather than being essential to the CCA+ICA method.
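A toy NumPy sketch of the two-step view (invented Laplace sources; a grid search over rotation angles stands in for a real ICA solver): whiten first, then rotate within the whitened space to maximize non-gaussianity, here measured by total absolute excess kurtosis.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 5000
S = rng.laplace(size=(n, 2))                  # independent non-Gaussian sources
X = S @ np.array([[1.0, 0.3], [0.4, 1.0]])    # observed linear mixture

# Step 1 (the PCA-like part): center and whiten
X = X - X.mean(axis=0)
U, sv, Vt = np.linalg.svd(X, full_matrices=False)
Z = U * np.sqrt(n)                            # whitened: identity covariance

def excess_kurtosis(y):
    return np.mean(y ** 4) / np.mean(y ** 2) ** 2 - 3.0

def rot(a):
    return np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])

# Step 2 (the ICA part): choose the rotation maximizing non-gaussianity
angles = np.linspace(0.0, np.pi / 2, 181)
scores = [sum(abs(excess_kurtosis((Z @ rot(a))[:, j])) for j in (0, 1))
          for a in angles]
recovered = Z @ rot(angles[int(np.argmax(scores))])

# Each recovered component should line up with one of the true sources
corr = np.corrcoef(np.column_stack([recovered, S]).T)[:2, 2:]
print(np.round(np.abs(corr).max(axis=1), 3))  # both close to 1
```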
26,836
Minimum number of repeated measures and levels per nested random effect
I agree with your reasoning, but it makes it easier to think about when we remember that: (1| site/block/subject) is the same as (1| site) + (1|site:block) + (1|site:block:subject) So, the limiting number of levels for each factor only applies to the "top" level - that is, site in this case. Here we have 8 sites, so that is OK. Obviously, regardless of how many levels we have for block and subject, the other two grouping terms will have more than 8 levels, so all is good here.
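Counting grouping levels for a hypothetical design (8 sites, 2 blocks per site, 3 subjects per block) shows that the implicit interaction terms always have at least as many levels as site itself:

```python
from itertools import product

sites = [f"site{i}" for i in range(1, 9)]   # 8 sites
blocks = ["b1", "b2"]                       # only 2 blocks within each site
subjects = ["p1", "p2", "p3"]               # only 3 subjects within each block

# levels of the expanded grouping terms site, site:block, site:block:subject
site_block = {f"{s}:{b}" for s, b in product(sites, blocks)}
site_block_subject = {f"{s}:{b}:{p}" for s, b, p in product(sites, blocks, subjects)}

print(len(sites), len(site_block), len(site_block_subject))   # 8 16 48
```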
26,837
If shrinkage is applied in a clever way, does it always work better for more efficient estimators?
Let me suggest an admittedly slightly boring counterexample. Say that $\hat{\beta}_1$ is not just asymptotically more efficient than $\hat{\beta}_2$, but also attains the Cramer Rao Lower Bound. A clever shrinkage technique for $\hat{\beta}_2$ would be: $$ \hat{\beta}_2^\ast = w \hat{\beta}_2 + (1 - w) \hat{\beta}_1 $$ with $w\in(0,1)$. The asymptotic variance of $\hat{\beta}_2^\ast$ is $$ V^\ast = \mathbb{Avar}(w \hat{\beta}_2 + (1 - w) \hat{\beta}_1) = \mathbb{Avar}(w (\hat{\beta}_2 - \hat{\beta}_1) + \hat{\beta}_1 ) = V_1 + w^2 (V_2 - V_1) $$ where the last equality uses the Lemma in Hausman's paper. We have $$ V_2 - V^\ast = V_2(1-w^2) - V_1(1-w^2) \geq 0 $$ so there is an asymptotic risk improvement (there are no bias terms). So we found a shrinkage technique that gives some asymptotic (and therefore hopefully finite sample) improvements over $\hat{\beta}_2$. Yet, there is no similar shrinkage estimator $\hat{\beta}_1^\ast$ that follows from this procedure. The point here of course is that the shrinkage is done towards the efficient estimator and is therefore not applicable to the efficient estimator itself. This seems pretty obvious on a high level but I would guess that in a specific example this is not so obvious (MLE and Method of Moments estimator for the uniform distribution may be an example?).
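The uniform-distribution example floated at the end can be checked by simulation (my own setup: $\theta = 1$, $n = 20$, the bias-corrected maximum as the efficient estimator, and an arbitrary weight $w = 0.5$): shrinking the method-of-moments estimator toward the efficient one improves its MSE, while no analogous move is available for the efficient estimator itself.

```python
import numpy as np

rng = np.random.default_rng(7)
theta, n, n_sim = 1.0, 20, 20000

X = rng.uniform(0, theta, size=(n_sim, n))
mle = X.max(axis=1) * (n + 1) / n     # bias-corrected maximum: unbiased, efficient
mom = 2 * X.mean(axis=1)              # method of moments: unbiased, less efficient

w = 0.5                               # arbitrary shrinkage weight
shrunk = w * mom + (1 - w) * mle      # shrink MoM toward the efficient estimator

mse = {name: np.mean((est - theta) ** 2)
       for name, est in [("MLE", mle), ("MoM", mom), ("shrunk MoM", shrunk)]}
for name, v in mse.items():
    print(f"{name}: {v:.5f}")
```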
26,838
If shrinkage is applied in a clever way, does it always work better for more efficient estimators?
This is an interesting question, where I want to point out some highlights first. The two estimators are consistent. $\hat{\beta}_1$ is more efficient than $\hat\beta_2$ since it achieves less variation. The loss functions are not the same. A shrinkage method is applied to one of them so that it reduces the variation, which by itself yields a better estimator. Question: In other words, if shrinkage is applied cleverly, does it always work better for more efficient estimators? Fundamentally, it is possible to improve an estimator within a certain framework, such as the class of unbiased estimators. However, as you pointed out, different loss functions make the situation difficult, as one loss function may minimise quadratic loss while the other minimises entropy. Moreover, the word "always" is very tricky, since if one estimator is the best in its class, you cannot claim any better estimator, logically speaking. For a simple example (in the same framework), take two estimators, namely a Bridge (penalised regression with an $l_p$ norm penalty) and the Lasso (first-norm penalised likelihood), a sparse set of parameters $\beta$, a linear model $y=x\beta+e$, normality of the error term $e\sim N(0,\sigma^2<\infty)$ with known $\sigma$, a quadratic loss function (least squared errors), and independence of the covariates in $x$. Choose $l_p$ with $p=3$ for the first estimator and $p=2$ for the second. Then you can improve the estimators by choosing $p\rightarrow 1$, which yields a better estimator with lower variance. So in this example there is a chance of improving the estimator. My answer to your question is therefore yes, given that you assume the same family of estimators, the same loss function, and the same assumptions.
26,839
How do I choose the best metric to measure my calibration?
I assume that you are doing unit tests for your code. One idea I can think of, which may not do exactly what you want, is to use a linear model. The benefit of doing that is that you can create a bunch of other variables to include in the analysis.

Let's say that you have a vector $\mathbf{Y}$ of test outcomes and another vector $\mathbf{x}$ of your predictions of those outcomes. Then you can simply fit the linear model $$ y_i = a + bx_i +\epsilon_i $$ and look at the value of $b$: if your predictions are well calibrated you would expect a slope near 1 and an intercept near 0, and a higher slope indicates that your predictions track the outcomes more closely.

What makes this approach nice is that you can now add other variables to see whether they give a better model, and those variables can help in making better predictions. One such variable could be an indicator for the day of the week, e.g. $m_i = 1$ on Mondays and 0 on all other days. Including it gives: $$ y_i = a + a_{\text{Monday}} m_i + bx_i +\epsilon_i $$ And if the coefficient $a_{\text{Monday}}$ is significant and positive, it could mean that you are more conservative in your predictions on Mondays.

You could also create a new variable scoring the difficulty of the task you performed. If you have version control, you could e.g. use the number of lines of code as difficulty, i.e. the more code you write, the more likely something will break. Other variables could be the number of coffee cups that day, an indicator for upcoming deadlines (meaning there is more stress to finish things), and so on. You can also use a time variable to see whether your predictions are getting better, or record how long you spent on the task, how many sessions you spent on it, whether it was a quick fix that might be sloppy, etc.

In the end you have a prediction model, with which you can try to predict the likelihood of success. If you manage to create this, then maybe you do not even have to make your own predictions: you can just use all the variables and get a pretty good guess at whether things will work.

The thing is that you only wanted a single number. In that case you can use the simple model from the beginning, take the slope, redo the calculation for each period, and then look for a trend in that score over time. Hope this helps.
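A minimal sketch of fitting the first model, assuming you have logged predictions and outcomes as arrays; the simulated data below is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400
pred = rng.uniform(0.1, 0.9, size=n)                   # your predicted success probabilities
outcome = (rng.uniform(size=n) < pred).astype(float)   # observed success (1) / failure (0)

# Least-squares fit of outcome on prediction: y = a + b*x
b, a = np.polyfit(pred, outcome, 1)
print(f"slope b = {b:.2f}, intercept a = {a:.2f}")
```

If your predictions are well calibrated, the fitted slope lands near 1 and the intercept near 0; refitting per period and tracking $b$ gives the single trend number mentioned at the end.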
26,840
How do I choose the best metric to measure my calibration?
Although this is more of a reference than an answer, it might be a good idea to check Steyerberg E, Epidemiology, 2012. In that article Steyerberg and colleagues explain different ways to check prediction-model performance for models with binary outcomes (success or failure); calibration is just one of these measures. Depending on whether you want an accurate probability, accurate classification, or accurate reclassification, you may want to use different measures of model performance. Even though the manuscript concerns models used in biomedical research, I feel they are applicable to other situations (yours) as well.

More specific to your situation: calibration metrics are really difficult to interpret because they summarize (i.e. average) the calibration over the entire range of possible predictions. Consequently, you might have a good calibration summary score while your predictions were off in an important range of predicted probabilities (e.g. a low, i.e. good, Brier score while the prediction of success is off above or below a certain predicted probability), or vice versa (a poor summary score while predictions are well calibrated in the critical area). I would therefore suggest you think about whether such a critical range of predicted success probability exists in your case. If so, use the appropriate measures (e.g. reclassification indices). If not (meaning you are interested in overall calibration), use the Brier score, or check the intercept and slope of your calibration plot (see the Steyerberg article).

To conclude, any of the calibration summary measures requires as a first step that you plot your predicted probabilities against the observed probabilities (see Outlier's answer for an example of how to). The summary measure can then be calculated, but the choice of summary measure should reflect the goal of predicting success or failure in the first place.
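For reference, the Brier score mentioned here is just the mean squared difference between predicted probability and observed outcome; a minimal illustrative sketch with simulated (not real) data:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000
p = rng.uniform(size=n)                       # predicted success probabilities
y = (rng.uniform(size=n) < p).astype(float)   # observed outcomes

brier = np.mean((p - y) ** 2)
# Reference point: Brier score of always forecasting the overall base rate
base = np.mean((y.mean() - y) ** 2)
print(f"Brier = {brier:.3f}, base-rate Brier = {base:.3f}")
```

A model only shows skill when its Brier score beats the base-rate reference; but, as stressed above, a good overall score can still mask poor calibration within a narrow, critical range of predicted probabilities.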
26,841
How do I choose the best metric to measure my calibration?
I have built prediction models on sparse data, and it is a big challenge to get a model calibrated in these cases. I will tell you what I did; perhaps it will help. I made 20 bins of predicted probability and plotted the average predicted against the actual probability of success. For the average predicted probability, I took the midpoint of each bin's range. For the actual probability, I counted actual successes and failures in each bin, from which I got the actual (median) probability of success in the bin. To reduce the impact of outliers, I removed the top and bottom 5% of the data before taking the actual median probability in each bin. Once I had these, I could easily plot the data.
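The binning procedure above can be sketched roughly as follows (simulated data, 20 equal-width bins; the trimming of the top and bottom 5% is omitted for brevity):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
pred = rng.uniform(size=n)                         # predicted success probability
obs = (rng.uniform(size=n) < pred).astype(float)   # actual success/failure

bins = np.linspace(0.0, 1.0, 21)                   # 20 bins of predicted probability
which = np.clip(np.digitize(pred, bins) - 1, 0, 19)
bin_mid = (bins[:-1] + bins[1:]) / 2               # average predicted prob per bin
obs_rate = np.array([obs[which == k].mean() for k in range(20)])

# For a calibrated model the observed rate tracks the bin midpoint.
print(np.round(np.abs(obs_rate - bin_mid).mean(), 3))
```

Plotting `bin_mid` against `obs_rate` gives the calibration plot; points near the diagonal indicate good calibration in that range.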
26,842
Sudden accuracy drop when training LSTM or GRU in Keras
Here are my suggestions to pinpoint the issue:

1) Look at the training learning curve: how is the learning curve on the training set? Does the model learn the training set? If not, work on that first to make sure you can overfit the training set.
2) Check your data to make sure there are no NaNs in it (training, validation, test).
3) Check the gradients and the weights to make sure there are no NaNs.
4) Decrease the learning rate as you train, to make sure the drop is not caused by a sudden big update that gets stuck in a sharp minimum.
5) To make sure everything is right, check the predictions of your network, so that it is not making constant or repetitive predictions.
6) Check that the data in each batch is balanced with respect to all classes.
7) Normalize your data to zero mean and unit variance, and initialize the weights likewise. It will assist the training.
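Checks 2, 6 and 7 can be automated before each batch. A minimal NumPy sketch, independent of Keras; the function name, thresholds and toy data are hypothetical:

```python
import numpy as np

def check_batch(x, y):
    """Pre-flight checks on a batch: no NaNs, class balance, normalisation."""
    assert not np.isnan(x).any() and not np.isnan(y).any(), "NaN in batch"
    _, counts = np.unique(y, return_counts=True)
    balance = counts.min() / counts.max()               # 1.0 means perfectly balanced
    x = (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-8)   # zero mean, unit variance
    return x, balance

rng = np.random.default_rng(4)
x = rng.normal(5.0, 3.0, size=(64, 10))   # toy batch of 64 samples, 10 features
y = rng.integers(0, 2, size=64)           # toy binary labels
xn, bal = check_batch(x, y)
print(xn.mean(axis=0).round(6), bal)
```

In a Keras workflow you would call something like this on each batch (or once on the full arrays) before `fit`, and log the balance ratio alongside the loss.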
26,843
Algorithm: Binary search when values are uncertain
Treat the problem as an array of Bayesian probabilities; initially, assume there's a 1/13 chance that the child is just below each level and, for completeness, a 1/13 chance they're off the top. Then:

1) Find the median level of your array, i.e. the level where the probability of being above it is closest to 50%.
2) Ask the child a question from that level.
3) Use Bayes' rule to update each cell's probability, assuming a 25% error rate.

Terminate and return the most likely level when one cell hits a sufficiently high probability, or, I guess, when you run out of questions on a level. A more rigorous treatment of this algorithm is here; I recommend reading it before implementing.
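A rough implementation sketch of the three steps (the 0.95 stopping threshold, the 13 levels and the cap on the number of questions are illustrative assumptions, not from the answer):

```python
import random

def noisy_search(true_level, levels=13, err=0.25, threshold=0.95, rng=None):
    """Bayesian noisy binary search: returns the most probable reading level."""
    rng = rng or random.Random(0)
    post = [1.0 / levels] * levels               # uniform prior over levels
    for _ in range(200):                         # cap on questions asked
        if max(post) >= threshold:
            break
        # Step 1: query the level whose probability of being above is closest to 50%.
        q = min(range(levels), key=lambda k: abs(sum(post[k:]) - 0.5))
        # Step 2: ask a question at level q; the answer is wrong with probability err.
        truth = true_level >= q
        answer = truth if rng.random() > err else not truth
        # Step 3: Bayes update, assuming the stated error rate.
        post = [p * ((1 - err) if ((k >= q) == answer) else err)
                for k, p in enumerate(post)]
        z = sum(post)
        post = [p / z for p in post]
    return max(range(levels), key=post.__getitem__)

print(noisy_search(true_level=7))
```

In repeated runs this recovers the true level with high probability despite the 25% answer noise, usually within a couple of dozen questions.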
26,844
Algorithm: Binary search when values are uncertain
Here's an implementation of a binary search algorithm that uses some probability techniques (possibly the same as Thimothy mentioned in his answer) to deal with a noisy binary search: https://github.com/adamcrume/robust-binary-search
26,845
What's the name for a time series with constant mean?
I suspect there is no general term that will cover all cases. Consider, for example, a white noise generator. In that case, we would just call it white noise. Now, if the white noise comes from a natural source, e.g., AM radio band white noise, then it has effects including superimposed diurnal, seasonal, and sun-spot (11-year) solar variability, and man-made primary and beat interference from radio broadcasts. For example, the graph in the link mentioned by the OP looks like amplitude-modulated white noise, almost like an earthquake.

I personally would examine such a curve in the frequency and/or phase domain, and describe it as an evolution of such in time, because directly observing how the amplitudes over a set of frequency ranges evolve in time with respect to detection limits would reveal a lot more about the signal structure than thinking about stationarity, mainly by reason of conceptual compactness. I understand the appeal of statistical testing. However, it would take umpteen tests and oodles of different criteria, as in the link, to incompletely describe an evolving frequency-domain concept, making the attempt to develop the concept of stationarity as a fundamental property seem rather confining. How does one go from that to Bode plotting, and phase plotting?

Having said that much, signal processing becomes more complicated when a "primary" violation of stationarity occurs: the patient dies, the signal stops, the random walk continues, and so forth. Such processes are easier to describe as a non-stationarity than, variously, as an infinite sum of odd harmonics, or a frequency decreasing to zero.

The OP's complaint about not having much literature documenting secondary stationarity is entirely reasonable; there does not seem to be complete agreement as to what even constitutes ordinary stationarity. For example, NIST claims that "A stationary process has the property that the mean, variance and autocorrelation structure do not change over time." Others on this site claim that "Autocorrelation doesn't cause non-stationarity," or, using mixture distributions of RVs, that "This process is clearly not stationary, but the autocorrelation is zero for all lags since the variables are independent." This is problematic because auto-non-correlation is typically "tacked on" as an additional criterion of non-stationarity without much consideration given to how necessary and sufficient it is for defining a process.

My advice would be first to observe a process, then to describe it, and to use phrases couched in modifiers such as "stationary/non-stationary with respect to ...", as the alternative is to confuse many readers as to what is meant.
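For instance, amplitude-modulated white noise of the kind described above has a constant (zero) mean but a time-varying variance; a quick, purely illustrative sketch:

```python
import numpy as np

rng = np.random.default_rng(5)
t = np.arange(2000)
envelope = 1.0 + t / 2000.0                 # slowly growing amplitude
x = envelope * rng.normal(size=t.size)      # constant-mean, nonstationary-variance noise

first, second = x[:1000], x[1000:]
print(f"means: {first.mean():.2f}, {second.mean():.2f}")    # both near 0
print(f"variances: {first.var():.2f}, {second.var():.2f}")  # clearly different
```

First-order (mean) stationarity holds while second-order (variance) stationarity fails, which is exactly why "stationary with respect to ..." is the safer phrasing.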
26,846
What is pretraining and how do you pretrain a neural network?
You start by training each RBM in the stack separately and then combine them into a new model which can be further fine-tuned. Suppose you have 3 RBMs: you train RBM1 with your data (e.g. a bunch of images), RBM2 is trained on RBM1's output, and RBM3 is trained on RBM2's output. The idea is that each RBM models features representative of the images, and the weights learned in doing so are useful in other discriminative tasks such as classification.
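A minimal NumPy sketch of this greedy stacking, using one step of contrastive divergence (CD-1) per update; the toy data, layer sizes and learning rate are arbitrary illustrative choices, not Hinton's original setup:

```python
import numpy as np

rng = np.random.default_rng(6)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class RBM:
    def __init__(self, n_vis, n_hid):
        self.W = 0.01 * rng.normal(size=(n_vis, n_hid))
        self.b = np.zeros(n_vis)   # visible bias
        self.c = np.zeros(n_hid)   # hidden bias

    def hidden(self, v):
        return sigmoid(v @ self.W + self.c)

    def cd1(self, v0, lr=0.1):
        """One step of contrastive divergence (CD-1); returns reconstruction error."""
        h0 = self.hidden(v0)
        h_sample = (rng.uniform(size=h0.shape) < h0).astype(float)
        v1 = sigmoid(h_sample @ self.W.T + self.b)   # reconstruction
        h1 = self.hidden(v1)
        self.W += lr * (v0.T @ h0 - v1.T @ h1) / len(v0)
        self.b += lr * (v0 - v1).mean(axis=0)
        self.c += lr * (h0 - h1).mean(axis=0)
        return np.mean((v0 - v1) ** 2)

# Toy binary data: each row is all zeros or all ones (two strongly correlated modes).
data = (rng.uniform(size=(500, 1)) < 0.5).astype(float) @ np.ones((1, 8))
rbm1 = RBM(8, 4)
errs = [rbm1.cd1(data) for _ in range(200)]
# Greedy stacking: RBM2 is trained on RBM1's hidden activations.
rbm2 = RBM(4, 2)
for _ in range(200):
    rbm2.cd1(rbm1.hidden(data))
print(errs[0], errs[-1])
```

The falling reconstruction error of RBM1 is only a proxy, but it shows the first layer learning features of the data before the next RBM is trained on top of it.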
26,847
What is pretraining and how do you pretrain a neural network?
Pretraining a stacked RBM means greedily minimising the defined energy layer by layer, i.e., maximising the likelihood. G. Hinton proposed the CD-k algorithm, which approximates the likelihood gradient by running k steps of Gibbs sampling; CD-1, a single Gibbs iteration, is the usual choice.
26,848
What is pretraining and how do you pretrain a neural network?
Pretraining is a multi-stage learning strategy in which a simpler model is trained before the training of the desired complex model is performed. In your case, pretraining with restricted Boltzmann machines is a method of greedy layer-wise unsupervised pretraining: you train the RBMs layer by layer, with the previously pre-trained layers fixed. Pretraining helps both in terms of optimization and generalization. Reference: Deep Learning by Ian Goodfellow et al.
26,849
What is the difference between "gold standard" and "ground truth"?
The more complete quote is:

"In some cases it can be impossible to get the actual label (also known as the ground truth or gold standard) and it is estimated from the subjective opinion of a small number of experts who can often disagree on the labels [14, 29]."

That strikes me as inane on several levels. If, for example, we use multiple subjective opinions to form a consensus, we would call that adjudicated opinion a reference standard, and it would not be without statistical properties that can be investigated. For example, look at The dirty coins and the three judges. Thus, although we cannot ascertain an absolute truth, we can explore how good our standard is and seek to improve it until it is good enough to be used as a measurement of whatever we wish to analyze. In the alternative case, we have only negative results. No matter what the context is, our responsibility is to state what the limits of accuracy and precision are for our measurements.

"Gold standard" is a term common in medical and allied fields. Many papers submitted using the term are never published, for good reason. Most frequently, this is because of circular reasoning of the "make an assumption, then prove that that assumption was made" type, with the territory covered by that circle consisting of fanciful results that cannot be duplicated without making the same plethora of ridiculous errors. It is better to use other terms to mitigate the opportunity for self-delusion. Unfortunately, the AMA-preferred term referred to in the Wikipedia entry, criterion standard, is not a synonym for gold standard, but rather refers to disease occurrence reporting. That is only rarely the circumstance under which authors have the bad habit of glibly using gilded comparisons.

A better term in most contexts would be reference standard, which is much more to the point. For example, if we refer to a "standard kilogram" we are not saying that that standard is correct in any sense, just that we have used it as a "yardstick" because that is what we had available. It is also better in the sense that just because we use something as a reference standard does not mean there is not a better standard that could be created, whereas the words "gold standard" are frequently followed in journal articles by the word "true" or "truth", often used with an order of magnitude greater frequency under the so-called gold-standard assumption. Case in point: the reference-standard platinum-iridium (i.e., better than gold) kilogram had slowly been losing mass (50 micrograms total) since it was first cast in the 19th century. As of May 20, 2019, that standard was replaced by a kilogram defined in terms of Planck's constant, which is not likely to drift as much as the physical reference kilogram did.

Like criterion standard, ground truth is frequently jargonesque. It sometimes refers to remote-sensing analytic results compared to outcomes obtained from pictures at least figuratively taken while someone is standing on the ground, i.e., by collation of more direct observations. Once again, the more generic term is reference standard, which I suggest has a much firmer scientific basis in the form of established practice and rules for its precision, accuracy, criticism, evaluation, improvement and deployment.
bayesglm (arm) versus MCMCpack
To see the full source code you need to download the arm package source from CRAN (it's a tarball). A quick look at the sim function makes me think that arm is an approximate Bayes method as it seems to assume multivariate normality of the maximum likelihood estimates. In models with a very non-quadratic log likelihood, such as the binary logistic model, this may be unlikely to be accurate enough. I'd like to get some comments from others about this. I have used MCMCpack with success; it provides an exact Bayesian solution for many models, given enough posterior draws and convergence of MCMC.
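To illustrate the concern about assuming multivariate normality of the maximum likelihood estimates, here is a toy example (a conjugate Bernoulli/Beta case, not bayesglm itself; all numbers are made up) where the exact posterior is skewed and the normal approximation badly misstates a tail probability:

```python
from statistics import NormalDist

# Exact posterior for a Bernoulli probability p with a flat Beta(1, 1) prior,
# after observing k = 9 successes in n = 10 trials: Beta(10, 2).
# Its CDF has the closed form 11*p**10 - 10*p**11.
def beta_10_2_cdf(p):
    return 11 * p**10 - 10 * p**11

# Normal approximation centred at the MLE with the usual asymptotic variance
# p_hat*(1 - p_hat)/n -- the kind of normality-around-the-MLE assumption
# that sim appears to make.
n, k = 10, 9
p_hat = k / n
approx = NormalDist(mu=p_hat, sigma=(p_hat * (1 - p_hat) / n) ** 0.5)

exact_tail = beta_10_2_cdf(0.7)   # exact P(p < 0.7), about 0.113
approx_tail = approx.cdf(0.7)     # normal approximation, under 0.02
```

Because the log likelihood is far from quadratic here, the symmetric normal approximation puts less than a sixth of the correct mass in the lower tail.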
Time Series Anomaly Detection with Python
I think an approach similar to statistical process control, with control charts etc. might be useful here.
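A minimal sketch of a Shewhart-style individuals chart (3-sigma rule; the helper names and data are made up, and in practice you would estimate the limits from an in-control baseline period rather than from the whole series):

```python
from statistics import mean, stdev

def control_chart_limits(series, n_sigma=3.0):
    """Centre line +/- n_sigma standard deviations, estimated from the data."""
    centre = mean(series)
    spread = stdev(series)
    return centre - n_sigma * spread, centre + n_sigma * spread

def out_of_control(series, n_sigma=3.0):
    """Indices of points outside the control limits."""
    lo, hi = control_chart_limits(series, n_sigma)
    return [i for i, x in enumerate(series) if x < lo or x > hi]

# Hypothetical series with one obvious anomaly at index 8
data = [10, 11, 9, 10, 10, 11, 9, 10, 40, 10, 11, 9]
flagged = out_of_control(data)
```

Note that a large anomaly inflates the estimated spread, which is exactly why control-chart practice separates the baseline (phase I) from monitoring (phase II).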
Time Series Anomaly Detection with Python
There are plenty of options for anomaly detection, from a simple standard-deviation rule using the Pandas std function, to Bayesian methods, with many machine learning methods in between: clustering, SVMs, Gaussian processes, neural networks. Take a look at this tutorial: https://www.datascience.com/blog/python-anomaly-detection From a Bayesian perspective I recommend Facebook Prophet. It gives very good results without the need to be a time-series expert. It has options for working with monthly, daily, etc. data, and its "uncertainty intervals" help with anomalies. Finally, I recommend this Uber blog post about using neural nets (LSTMs) for anomaly detection; it has very good insights: https://eng.uber.com/neural-networks/
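The rolling standard-deviation idea can be sketched without any libraries; here is a plain-Python stand-in for the pandas rolling-std approach (window size, threshold, and data are arbitrary choices):

```python
from statistics import mean, stdev

def rolling_zscore_anomalies(series, window=5, threshold=3.0):
    """Flag points whose z-score relative to the trailing window
    exceeds the threshold."""
    anomalies = []
    for i in range(window, len(series)):
        past = series[i - window:i]
        m, s = mean(past), stdev(past)
        if s > 0 and abs(series[i] - m) / s > threshold:
            anomalies.append(i)
    return anomalies

# Hypothetical series: a level spike at index 7
data = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 5.0, 1.0, 1.1]
```

One caveat visible even in this toy: once the spike enters the trailing window it inflates the window's standard deviation, temporarily masking later points (a reason robust estimators such as the median/MAD are often preferred).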
Time Series Anomaly Detection with Python
If you are willing to assume that your dataset is normally distributed, then you can estimate quantiles of this distribution and see whether a point falls outside, e.g., the central 95% or 80% interval. I'm not too familiar with Python libraries, but I'm sure there are already built-in functions for it.
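With Python 3.8+ this needs nothing beyond the standard library's statistics.NormalDist (the data and coverage level below are made up):

```python
from statistics import NormalDist, mean, stdev

def normal_interval_outliers(series, coverage=0.95):
    """Fit a normal distribution to the data and return the points that
    fall outside the central `coverage` interval."""
    dist = NormalDist(mu=mean(series), sigma=stdev(series))
    alpha = (1 - coverage) / 2
    lo, hi = dist.inv_cdf(alpha), dist.inv_cdf(1 - alpha)
    return [x for x in series if x < lo or x > hi]

outliers = normal_interval_outliers([9, 10, 11, 10, 9, 11, 10, 25])
```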
Bayesian modeling using multivariate normal with covariate
Question 1: Given your joint probability model $$\left( \begin{array}{ccc} {\bf{X}} \\ {\bf{Y}} \end{array}\right) \sim N\left(\left(\begin{array}{ccc}\mu_{1}\boldsymbol{1}\\ \mu_{2}\boldsymbol{1}\end{array}\right), \begin{bmatrix} \boldsymbol\Sigma_{11} & \boldsymbol\Sigma_{12} \\ \boldsymbol\Sigma_{21} & \boldsymbol\Sigma_{22} \end{bmatrix} \right)=N\left(\left(\begin{array}{ccc}\mu_{1}\boldsymbol{1}\\ \mu_{2}\boldsymbol{1}\end{array}\right), T\otimes H(\phi)\right)$$ the conditional distribution of $\bf{Y}$ given $\bf{X}$ is also Normal, with mean $$\boldsymbol\mu_2 + \boldsymbol\Sigma_{21} \boldsymbol\Sigma_{11}^{-1} \left( \mathbf{X} - \boldsymbol\mu_1\right)$$ and variance-covariance matrix $$\boldsymbol\Sigma_{22} - \boldsymbol\Sigma_{21} \boldsymbol\Sigma_{11}^{-1} \boldsymbol\Sigma_{12}.$$ (Those formulas are from the Wikipedia page on multivariate normals.) The same applies to $p(y(s_0)\mid x(s_0), {\bf{X}}, {\bf{Y}})$ since $(y(s_0), x(s_0), {\bf{X}}, {\bf{Y}})$ is another Normal vector. Question 2: The predictive $p(y(s_0)\mid x(s_0), {\bf{X}}, {\bf{Y}})$ is defined as $$ p(y(s_0) | x(s_0), {\bf{X}}, {\bf{Y}})=\int p(y(s_0)| x(s_0), {\bf{X}}, {\bf{Y}},\mu,T,\phi)\,p(\mu,T,\phi| x(s_0), {\bf{X}}, {\bf{Y}})\,\text{d}\mu\,\text{d} T\,\text{d}\phi\,, $$ i.e., by integrating out the parameters using the posterior distribution of those parameters, given the current data $({\bf{X}}, {\bf{Y}},x(s_0))$. So there is a little bit more to the full answer. Obviously, if you only need to simulate from the predictive, your notion of simulating jointly from $p(\mu, T, \phi\mid {\bf{X}}, x(s_0), {\bf{Y}})$ and then from $p(y(s_0)\mid x(s_0), {\bf{X}}, {\bf{Y}},\mu,T,\phi)$ is valid. 
Question 3: In the event that $x(s_0)$ is not observed, the pair $(x(s_0),y(s_0))$ can be predicted from another predictive $$ p(x(s_0),y(s_0)\mid {\bf{X}}, {\bf{Y}})=\int p(x(s_0),y(s_0)\mid {\bf{X}}, {\bf{Y}},\mu,T,\phi)\,p(\mu,T,\phi\mid {\bf{X}}, {\bf{Y}})\,\text{d}\mu\,\text{d} T\,\text{d}\phi\,. $$ When simulating from this predictive, because it is not available in a manageable form, a Gibbs sampler can be run that iteratively simulates

1. $\mu\mid {\bf{X}}, {\bf{Y}},x(s_0),y(s_0),T,\phi$
2. $T\mid {\bf{X}}, {\bf{Y}},x(s_0),y(s_0),\mu,\phi$
3. $\phi\mid {\bf{X}}, {\bf{Y}},x(s_0),y(s_0),T,\mu$
4. $x(s_0)\mid {\bf{X}}, {\bf{Y}},y(s_0),\phi,T,\mu$
5. $y(s_0)\mid {\bf{X}}, {\bf{Y}},x(s_0),\phi,T,\mu$

or else merge steps 4 and 5 into a single step $x(s_0),y(s_0)\mid {\bf{X}}, {\bf{Y}},\phi,T,\mu$
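In the bivariate (scalar-block) case the conditioning formulas above reduce to scalars and are easy to verify numerically; a small sketch with made-up parameter values:

```python
# Conditional distribution of Y given X = x for a bivariate normal:
#   mean_cond = mu2 + (s21 / s11) * (x - mu1)
#   var_cond  = s22 - s21**2 / s11
# i.e. the scalar case of mu2 + S21 S11^{-1} (X - mu1) and
# S22 - S21 S11^{-1} S12.
def conditional_normal(mu1, mu2, s11, s22, s21, x):
    mean_cond = mu2 + (s21 / s11) * (x - mu1)
    var_cond = s22 - s21 ** 2 / s11
    return mean_cond, var_cond

# Hypothetical values: unit variances, covariance 0.8, observed x = 2
m, v = conditional_normal(mu1=0.0, mu2=1.0, s11=1.0, s22=1.0, s21=0.8, x=2.0)
```

With zero covariance the conditional distribution reduces to the marginal of $Y$, which makes a handy sanity check.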
How to evaluate goodness of fit for negative binomial regression
Generally speaking, a good-fitting model is one that does a good job generalizing to data not captured in your sample. A good way to mimic this is through cross-validation (CV). To do this, you subset your data into two parts: a training data set and a testing data set. Based on your sample size, I would recommend randomly putting 70% of your data into the training data set and the remaining 30% into the testing data set. Now, build both the Poisson model and the negative binomial model on your training data set. Calculate the predicted values for the data in your testing data set and compare them to the actual values in the following way: $\sum_{i=1}^{n_2} (Y_i - \hat{Y}_i)^2$ where $n_2$ is the sample size of your testing data set, $Y_i$ is the actual value of the dependent variable, and $\hat{Y}_i$ is the predicted value of the dependent variable. Whichever model provides a lower value for the above expression is the preferred model. Now, there is a modification of this called k-fold CV. It splits your data into $k$ approximately equal subsets (called "folds") and predicts each fold using the remaining folds as training data. Setting $k=4$ seems reasonable to me. The relevant R function for this is cv.glm() in the boot package. More information here: http://stat.ethz.ch/R-manual/R-patched/library/boot/html/cv.glm.html
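A minimal sketch of the k-fold machinery in plain Python (the `predict` callback and toy data are placeholders for your actual Poisson / negative binomial fits):

```python
import random

def kfold_indices(n, k, seed=0):
    """Randomly partition indices 0..n-1 into k roughly equal folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cv_squared_error(y, predict, k=4):
    """Sum of squared prediction errors over held-out folds.
    `predict(train_idx, test_idx)` should return predictions for test_idx
    from a model fitted on train_idx only."""
    total = 0.0
    for fold in kfold_indices(len(y), k):
        held_out = set(fold)
        train = [i for i in range(len(y)) if i not in held_out]
        preds = predict(train, fold)
        total += sum((y[i] - p) ** 2 for i, p in zip(fold, preds))
    return total

# Toy stand-in model: predict the training-set mean for every held-out point
y = [2, 3, 1, 4, 2, 5, 3, 2]
mean_model = lambda train, test: [sum(y[i] for i in train) / len(train)] * len(test)
score = cv_squared_error(y, mean_model, k=4)
```

To compare two models you would run `cv_squared_error` twice with the same folds and prefer the lower score.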
How to evaluate goodness of fit for negative binomial regression
I would suggest using approaches such as the Akaike information criterion or the Bayesian information criterion and comparing the returned values of your two models (GLM vs. NBR). Also, using cross-validation to see which model performs better could be an option and is commonly used, at least to get an impression of how a learned model performs.
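For reference, the criteria are AIC = 2k − 2 ln L and BIC = k ln n − 2 ln L; a quick sketch with hypothetical numbers (the log-likelihoods below are invented, not from any real fit):

```python
import math

def aic(log_likelihood, k):
    """Akaike information criterion; lower is better."""
    return 2 * k - 2 * log_likelihood

def bic(log_likelihood, k, n):
    """Bayesian information criterion; lower is better."""
    return k * math.log(n) - 2 * log_likelihood

# Hypothetical fits: a Poisson GLM with 3 parameters and a negative binomial
# with 4 (the extra dispersion parameter), on n = 4000 observations.
aic_pois, aic_nb = aic(-9000.0, 3), aic(-8800.0, 4)
bic_pois, bic_nb = bic(-9000.0, 3, 4000), bic(-8800.0, 4, 4000)
# Here the NB's likelihood gain outweighs its one-parameter penalty
# on both criteria.
```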
How to evaluate goodness of fit for negative binomial regression
So, if you just want to know whether your fit is significant, you can compute a p-value. First, find a good metric for your problem. For distributions, a typical choice is the Kolmogorov-Smirnov distance: $KS(f,g)=\max|f(x)-g(x)|$. Now, call $E$ the empirical cdf of your data and $P$ the analytical cdf of your fit, and let $KS_0=KS(E,P)$. We want to compute the probability of obtaining a $KS>KS_0$, given that we assume your fit is correct. We can easily do that by sampling $n$ sets of 4000 points from your fitted distribution; we then fit each sampled set and compute the $KS$ between the sampled set and its own fit. The p-value is simply the proportion of the $n$ sets where this $KS>KS_0$. If the resulting p-value is $>0.05$ (or whatever significance level you set), then your data are compatible with your fit.
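A sketch of the whole procedure, using a normal distribution as a stand-in for whatever family you actually fitted (function names and the bootstrap size are my own choices):

```python
import random
from statistics import NormalDist, mean, stdev

def ks_distance(sample, cdf):
    """Max gap between the empirical CDF of `sample` and a fitted `cdf`."""
    xs = sorted(sample)
    n = len(xs)
    return max(max(abs((i + 1) / n - cdf(x)), abs(i / n - cdf(x)))
               for i, x in enumerate(xs))

def parametric_bootstrap_pvalue(data, n_sets=200, seed=0):
    """P(KS > KS_0) under the fitted model, estimated by simulating
    n_sets datasets and refitting each one, as described above."""
    rng = random.Random(seed)
    fit = NormalDist(mean(data), stdev(data))
    ks0 = ks_distance(data, fit.cdf)
    exceed = 0
    for _ in range(n_sets):
        sim = [rng.gauss(fit.mean, fit.stdev) for _ in data]
        refit = NormalDist(mean(sim), stdev(sim))  # refit the sampled set
        if ks_distance(sim, refit.cdf) > ks0:
            exceed += 1
    return exceed / n_sets
```

Refitting inside the loop matters: comparing simulated sets against the original fit (rather than their own) would make the test too lenient.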
What is the relationship between scale reliability measures (Cronbach's alpha etc.) and component/factor loadings?
I am going to add an answer here even though the question was asked a year ago. Most people who are concerned with measurement error will tell you that using factor scores from a CFA is not the best way to move forward. Doing a CFA is good. Estimating factor scores is OK as long as you correct for the amount of measurement error associated with those factor scores in subsequent analyses (an SEM program is the best place to do this). To get the reliability of the factor score, you need to first calculate the latent construct's reliability from your CFA (rho): rho = factor score variance / (factor score variance + factor score standard error^2). Note that the factor score standard error^2 is the error variance of the factor score. This information can be obtained in Mplus by requesting the PLOT3 output as part of your CFA program. To calculate the quantity you will fix in subsequent models, use the following formula: (1 - rho)*(FS variance + FS error variance). The resulting value is the error variance of the factor score. If you were using Mplus for subsequent analyses, you create a latent variable defined by a single item (the factor score) and then fix the factor score's error variance: LatentF BY FScore@1; FScore@(calculated error variance of the factor score) Hope this is helpful! A great resource for this issue is the lecture notes (lecture 11, in particular) from Lesa Hoffman's SEM class at the University of Nebraska-Lincoln.
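To make the arithmetic concrete (the numbers below are invented, not taken from any real CFA output):

```python
# Hypothetical values read off a CFA's factor-score output:
fs_variance = 0.80        # factor score variance
fs_se = 0.30              # factor score standard error
fs_error_variance = fs_se ** 2

# rho = FS variance / (FS variance + FS standard error^2)
rho = fs_variance / (fs_variance + fs_error_variance)

# Error variance to fix on the single-indicator latent variable:
# (1 - rho) * (FS variance + FS error variance)
fixed_error = (1 - rho) * (fs_variance + fs_error_variance)
# Note: with rho defined from these same two quantities, this simplifies
# algebraically to fs_se**2 -- i.e. the factor score's error variance.
# In Mplus syntax this value is what goes in: FScore@<fixed_error>;
```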
Bayesian analysis of contingency tables: How to describe effect size
One way to study effect size in an ANOVA model is by looking at "superpopulation" and "finite population" standard deviations. You have a two-way table, so there are 3 variance components (2 main effects and 1 interaction). This is based on MCMC analysis: you calculate the standard deviation for each effect for each MCMC sample, $$ s_k=\sqrt{\frac{1}{d_k-1}\sum_{j=1}^{d_k}(\beta_{k, j}-\overline {\beta}_k)^2}$$ where $ k $ indexes the "row" of the ANOVA table. Simple boxplots of the MCMC samples of $ s_k $ vs $ k $ are quite instructive on effect sizes. Andrew Gelman advocated this approach; see his 2005 paper "Analysis of variance: why it is more important than ever".
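In code, the per-draw computation is just a sample standard deviation over that draw's batch of effects (the draws below are fabricated for illustration):

```python
from statistics import stdev

def effect_sd_draws(mcmc_draws):
    """For each MCMC sample of a batch of effects beta_{k,1..d_k},
    compute s_k = sd of the effects in that draw (n-1 denominator,
    matching the formula above)."""
    return [stdev(draw) for draw in mcmc_draws]

# Hypothetical draws for a 3-level main effect, 4 MCMC samples
draws = [
    [0.1, -0.2, 0.1],
    [0.3, -0.1, -0.2],
    [0.0,  0.2, -0.2],
    [0.2, -0.3, 0.1],
]
s_k = effect_sd_draws(draws)
# Summaries (e.g. boxplots) of s_k across draws describe that effect's size.
```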
Bayesian analysis of contingency tables: How to describe effect size
Per the index, Kruschke only mentions effect size twice, and both times are in the context of a metric predicted variable. But there's this bit on p. 601: If the researcher is interested in violations of independence, then interest is on the magnitudes of the $\beta_{rc}$. The model is especially convenient for this purpose, because arbitrary interaction contrasts can be investigated to determine where nonindependence is arising. So, I gather that $\beta_{1,2}$ is the parameter to interpret. Let $S$ equal the sum of products of all coefficients and their corresponding $x$ elements, excluding $\beta_{1,2}$ and $x_{1,2}$. Since $y_i \sim \text{Pois}(\lambda_i)$, we have $\lambda_i = e^{\beta_{1,2} x_{1,2} + S} = e^{\beta_{1,2} x_{1,2}} e^S$. When $x_{1,2} = 1$, then $\lambda_i$ grows or shrinks by a factor of $e^{\beta_{1,2}}$, no?
Trying to understand Gaussian Process
"...and the above is what we do in Bayesian inference for parametric models, right?" The book is using Bayesian model averaging, which is the same for parametric models or any other Bayesian method, given that you have a posterior over your parameters. "Now I have a noise-free training data set" It doesn't need to be 'noise-free'; see later pages. "HOWEVER, that's not what the book does! I mean, after specifying the prior p(f), it doesn't compute the likelihood and posterior, but just goes straight to the predictive distribution." See this: https://people.cs.umass.edu/~wallach/talks/gp_intro.pdf I believe on page 17 we have the prior, and later the likelihood. I believe if you write out the derivations, find the posterior, and then average over the posterior for prediction (as in the weight-space view), it will result in the same equations as on page 19 for the mean and covariance.
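For the noise-free function-space view, the standard predictive equations (posterior mean $k_*^\top K^{-1}\mathbf{y}$ and variance $k(x_*,x_*) - k_*^\top K^{-1} k_*$) can be checked with a tiny dependency-free sketch; the RBF kernel, two-point training set, and test inputs here are my own toy choices:

```python
import math

def rbf(x1, x2, length=1.0):
    """Squared-exponential kernel."""
    return math.exp(-0.5 * ((x1 - x2) / length) ** 2)

def gp_posterior(train_x, train_y, test_x):
    """Noise-free GP regression with exactly two training points
    (zero prior mean); the 2x2 kernel matrix is inverted by hand."""
    (x1, x2), (y1, y2) = train_x, train_y
    a, b, d = rbf(x1, x1), rbf(x1, x2), rbf(x2, x2)
    det = a * d - b * b
    inv = [[d / det, -b / det], [-b / det, a / det]]   # K^{-1}
    ks = [rbf(test_x, x1), rbf(test_x, x2)]            # k_*
    alpha = [inv[0][0] * y1 + inv[0][1] * y2,          # K^{-1} y
             inv[1][0] * y1 + inv[1][1] * y2]
    mean = ks[0] * alpha[0] + ks[1] * alpha[1]
    var = rbf(test_x, test_x) - (
        ks[0] * (inv[0][0] * ks[0] + inv[0][1] * ks[1])
        + ks[1] * (inv[1][0] * ks[0] + inv[1][1] * ks[1]))
    return mean, var

# At a training input, the noise-free posterior interpolates exactly:
m, v = gp_posterior([0.0, 2.0], [1.0, -1.0], 0.0)
```

Between the training points the mean shrinks toward the prior mean and the variance reopens, which is the behaviour the page-19 equations encode.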
What's the fundamental difference between these two regression models?
First, I will introduce a fourth model for the discussion in my answer: fit1.5 <- lm(y_2 ~ x_1 + x_2 + y_1) Part 0 The difference between fit1 and fit1.5 is best summarized as the difference between a constrained difference vs. an optimal difference. I am going to use a simpler example to explain this than the one provided above. Let's start with fit1.5. A simpler version of the model would be $$y_2 = b_0 + b_1·x + b_2·y_1$$ Of course, when we obtain an OLS estimate, it will find the "optimal" choice for $b_2$. And, though it seems strange to write it as such, we could rewrite the formula as $$y_2 - b_2·y_1 = b_0 + b_1·x$$ We can think of this as the "optimal" difference between the two $y$ variables. Now, if we decide to constrain $b_2=1$, then the formula/model becomes $$y_2 - y_1 = b_0 + b_1·x$$ which is just the (constrained) difference. Note, in the above demonstration, if you let $x$ be a dichotomous variable, and $y_1$ be a pre-test and $y_2$ a post-test score pairing, then the constrained difference model would just be the independent samples $t$-test for the gain in scores, whereas the optimal difference model would be the ANCOVA test with the pre-test scores used as covariates. Part 1 The model for fit2 can best be thought of in a fashion similar to the difference approach used above. Though this is an oversimplification (as I am purposefully leaving out the error terms), the model could be presented as $$y = b_0 + b_1 · x + b_2 · t$$ where $t=0$ for the $y_1$ values and $t=1$ for the $y_2$ values. Here is the oversimplification...this lets us write $$\begin{align}y_1 & = b_0 + b_1 · x \\ y_2 & = b_0 + b_1 · x + b_2\end{align}$$ Written another way, $y_2 - y_1 = b_2$. Whereas model fit1.5 had $b_2$ as the value that makes the optimal difference for the OLS analysis, here $b_2$ is essentially just the average difference between the $y$ values (after controlling for the other covariates).
Part 2 So what is the difference between models fit2 and fit3? Actually, very little. The fit3 model does account for correlation in the error terms, but this only changes the estimation process, and thus the differences between the two model outputs will be minimal (beyond the fact that fit3 estimates the autoregressive factor). Part 2.5 And I will include yet one more model in this discussion: fit4 <- lmer(y~time+x1+x2 + (1|id),data=df.long) This mixed-effects model fits a slightly different version of the autoregressive approach. If we were to include the time coefficient in the random effects, this would be comparable to calculating the difference between the $y$s for each subject. (But this won't work... and the model won't run.)
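The constrained vs. optimal distinction is easy to check on simulated data (all coefficients and the sample size below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)
y1 = rng.normal(size=n)
y2 = 1.0 + 2.0 * x + 0.6 * y1 + 0.1 * rng.normal(size=n)

# "Optimal" difference: y2 = b0 + b1*x + b2*y1, with b2 estimated by OLS
X_opt = np.column_stack([np.ones(n), x, y1])
b_opt, *_ = np.linalg.lstsq(X_opt, y2, rcond=None)

# "Constrained" difference: fix b2 = 1, i.e. regress (y2 - y1) on x
X_con = np.column_stack([np.ones(n), x])
b_con, *_ = np.linalg.lstsq(X_con, y2 - y1, rcond=None)

print(np.round(b_opt, 3), np.round(b_con, 3))
```

`b_opt` recovers the freely estimated $b_2$, while the constrained fit regresses the plain gain score $y_2 - y_1$ on $x$; in this setup both recover the same $x$ effect because $y_1$ is independent of $x$, but the constrained version has noisier residuals whenever the true $b_2 \neq 1$.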
26,863
What do I do when values of AIC are low and approximately equal?
It's true that if you have multiple AIC values that are approximately equal, selecting the lowest value may not be the best option. A sensible alternative would be performing model averaging. This way you are able to use not just the best model for inference, but a set of "most supported" models, each one weighted according to its AIC value. You have a short introduction by Vincent Calcagno here
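One common way to weight the set is via Akaike weights, $w_i = \exp(-\Delta_i/2)/\sum_j \exp(-\Delta_j/2)$ with $\Delta_i = AIC_i - AIC_{min}$. A small sketch (the AIC values below are made up):

```python
import numpy as np

def akaike_weights(aic):
    """w_i = exp(-0.5 * dAIC_i) / sum_j exp(-0.5 * dAIC_j)."""
    aic = np.asarray(aic, dtype=float)
    delta = aic - aic.min()        # subtracting the min also avoids overflow
    w = np.exp(-0.5 * delta)
    return w / w.sum()

w = akaike_weights([100.0, 100.2, 101.0, 110.0])
print(np.round(w, 3))
```

Models within ~2 AIC units of the best get substantial weight, while the model 10 units away contributes almost nothing to the average.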
26,864
Generalized Linear Mixed Models: Diagnostics
The diagnostic methods are indeed different for generalized linear mixed models. A reasonable one that I have seen, based on residuals from a GLMM, is due to Pan and Lin (2005, DOI: 10.1111/j.1541-0420.2005.00365.x). They use cumulative sums of residuals where the ordering is imposed either by the explanatory variables or by the linear predictor, thus testing either the specification of the functional form of a given predictor or the link function as a whole. The null distributions are based on simulations over the design space under a correctly specified model, and they demonstrated decent size and power properties for this test. They did not discuss outliers specifically, but I can imagine that outliers would probably throw off at least the link function by curving it too much towards the influential observation.
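A stripped-down illustration of the cumulative-residual idea (ordinary least squares instead of a GLMM, and without the simulation-based null calibration that Pan and Lin use; all data below are simulated): under a correct mean model the cumulative sum of covariate-ordered residuals wanders near zero, while a misspecified functional form produces a large systematic excursion.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000
x = rng.uniform(-2.0, 2.0, size=n)
y = x**2 + 0.1 * rng.normal(size=n)        # true mean is quadratic in x

# Misspecified (linear) and correctly specified (quadratic) mean models
X1 = np.column_stack([np.ones(n), x])
X2 = np.column_stack([np.ones(n), x, x**2])
r1 = y - X1 @ np.linalg.lstsq(X1, y, rcond=None)[0]
r2 = y - X2 @ np.linalg.lstsq(X2, y, rcond=None)[0]

# Cumulative residual processes, ordered by the covariate
order = np.argsort(x)
W1 = np.cumsum(r1[order]) / np.sqrt(n)     # large systematic excursion
W2 = np.cumsum(r2[order]) / np.sqrt(n)     # wanders near zero
print(round(np.abs(W1).max(), 2), round(np.abs(W2).max(), 2))
```

The formal test compares the observed excursion against simulated null curves rather than eyeballing it.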
26,865
Generalized Linear Mixed Models: Diagnostics
There are a lot of different opinions on what the best way to look at diagnostics for mixed models is. Generally, you will want to look at both the residuals and the standard aspects that would be examined for a non-repeated-measures model. In addition to those, typically, you will also want to look at the random effects themselves. Methods often involve plotting the random effects by various covariates and looking for non-normality in the random effects distribution. There are many more methods (some mentioned in the prior comments), but this is usually a good start.
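A sketch of one such check, comparing sorted random effects against normal quantiles (the "effects" below are simulated stand-ins for the BLUPs you would extract from a fitted model, e.g. with ranef() in lme4):

```python
import numpy as np

rng = np.random.default_rng(7)
re_normal = rng.normal(0.0, 1.0, size=200)     # well-behaved random effects
re_skewed = rng.exponential(1.0, size=200)     # clearly non-normal effects

def qq_corr(u, rng=rng):
    """Correlation between sorted effects and (approximate) normal quantiles."""
    u = np.sort(u)
    probs = (np.arange(len(u)) + 0.5) / len(u)
    q = np.quantile(rng.standard_normal(100_000), probs)   # Monte Carlo quantiles
    return np.corrcoef(u, q)[0, 1]

c_norm = qq_corr(re_normal)
c_skew = qq_corr(re_skewed)
print(round(c_norm, 3), round(c_skew, 3))
```

In practice you would plot the QQ pairs rather than reduce them to one correlation, but a markedly lower value flags a non-normal random-effects distribution.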
26,866
How to model month to month effects in daily time series data?
What does the CCF plot look like for lags 29 to 31? Are the spikes frequent enough that it shows up? You can use a Granger test to check which lagged values are statistically significant.
26,867
How to model month to month effects in daily time series data?
Month level models You should capture the month-level variations in the propensity to terminate (say, signups during Christmas holidays are more likely to terminate than signups during April). Let's say your usual time series model is: $$terminations_{t}=\beta_{1} signups_{t-1}+ \beta_{2} signups_{t-2} +\ldots$$ Now if you believe that the parameters $\beta_{1}$ etc. are month specific, you can interact the month indicator flag with the remaining predictors. Thus your new functional form will be $$terminations_{t}=\beta'_{1} signups_{t-1} MonthFlag_{t-1}+ \beta'_{2} signups_{t-2} MonthFlag_{t-2}+\ldots$$ This is akin to building month-level models, allowing greater flexibility in capturing month-specific variations in the tendency to terminate.
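A sketch of how the interaction design matrix might be built (the lag, the coefficients, and the data are all hypothetical):

```python
import numpy as np

rng = np.random.default_rng(3)
n_days = 3 * 365
month = (np.arange(n_days) // 30) % 12            # crude month-of-year index
signups = rng.poisson(100, size=n_days).astype(float)

lag = 30                                          # terminations respond ~30 days later
sign_lag = signups[:-lag]                         # signups_{t-lag}, aligned with day t
m_lag = month[:-lag]                              # month of the lagged signup

# One interaction column per month: column m holds signups_{t-lag} when the
# lagged signup fell in month m, and 0 otherwise
X = np.zeros((n_days - lag, 12))
X[np.arange(n_days - lag), m_lag] = sign_lag

beta_month = 0.5 + 0.04 * np.arange(12)           # simulated month-specific effects
y = X @ beta_month + rng.normal(0.0, 1.0, size=n_days - lag)

b_hat, *_ = np.linalg.lstsq(X, y, rcond=None)     # one coefficient per month
print(np.round(b_hat, 2))
```

Because the columns have disjoint support, this is exactly the "one model per month" flexibility described above, fit in a single regression.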
26,868
How does one apply Kalman smoothing with irregular time steps?
Yes. In fact, this is how the Kalman Filter (KF) is also set up, at least implicitly. The assumptions in place when choosing the KF model are that the movements and measurements compose a linear dynamical system. The transition matrix $F_t$ (in the equation $\hat{x}_{t|t-1} = F_t x_{t-1} + \ldots$, where $\hat{x}$ is the predicted state estimate) is in fact indexed by time, so irregular observations shouldn't be an issue. For a more mathematically rigorous explanation of the KF, Max Welling has a really good tutorial that I highly recommend.
26,869
How does one apply Kalman smoothing with irregular time steps?
The process (model) noise in a Kalman filter is assumed to be zero-mean Gaussian white noise. Under this assumption, the process noise at time t is independent of the process noise at t + dt. (Now this certainly may not be a valid assumption for the system one is actually attempting to model, but notwithstanding that, this is the assumption made for a Kalman filter.) Fans of noise will recognize the time integral of Gaussian white noise as a Wiener process, or Brownian motion. There are some heavy-hitting (for me, but thankfully not for Einstein and Norbert Wiener) mathematical derivations involved that boil down to roughly the same conclusion: the standard deviation of integrated white Gaussian noise scales with the square root of time. To use this for our Kalman filter prediction step, let's start with Wikipedia's notation for a discrete-time Kalman filter: Let $\textbf{w}_k$ be the process noise at time $k$, assumed to be drawn from a white-noise (zero-mean, time-uncorrelated) multivariate Gaussian distribution with covariance $\textbf{Q}_k$. The state covariance $\textbf{P}$ is increased during the prediction step according to $\textbf{P}_{k|k-1} = \textbf{F}_k\textbf{P}_{k-1|k-1}\textbf{F}_k^{T} + \textbf{Q}_k$, where $\textbf{P}_{k|k-1}$ is the a priori state covariance estimate at timestep $k$ given all observations up to $k-1$, $\textbf{F}_k$ is the model dynamics at timestep $k$, and $\textbf{P}_{k-1|k-1}$ is the a posteriori state covariance estimate at timestep $k-1$ given all observations up to timestep $k-1$. Intuitively, the process noise covariance $\textbf{Q}$ depends on time. Over an infinitesimal time, the noise itself cannot change more than infinitesimally, and likewise neither can the process noise covariance. Implicitly (in discrete-time KF notation), $\textbf{Q}_k$ represents the Wiener process for an independent standard normal noise variable evaluated over the time of one timestep. But we can also evaluate the Wiener process over an arbitrary time window.
Let's express this by letting $\textbf{Q}(\Delta)$ represent the process noise covariance for an arbitrary time difference $\Delta$, i.e. the shift in the Wiener process over that time. Now we run into a bit of a notation shift, because a "timestep" is a discrete unit of time whereas we prefer a continuous expression. To circumvent this, let $\Delta_k$ be the time difference between two consecutive timesteps, say $k$ and $k-1$. When $\Delta = \Delta_k$, then $\textbf{Q}(\Delta) = \textbf{Q}_k$. Let's move briefly into the notation adopted for the Wiener process: the following expression holds for a Wiener process $W$ evaluated at times $t_2$ and $t_1$, where $Z$ is an independent standard normal noise variable: $W_{t_2} = W_{t_1} + \sqrt{t_2 - t_1} \cdot Z$. Rearranging, we obtain an expression for the change in a Wiener process over time: $W_{t_2} - W_{t_1} = \sqrt{t_2 - t_1} \cdot Z$. Now, shifting for the last time back to Kalman filter notation, we note that $\textbf{Q}(\Delta)$ also represents the shift in a Wiener process over time. We can thus write the expression for $\textbf{Q}(\Delta)$ as: $\textbf{Q}(\Delta) = \sqrt{\Delta} \cdot Z$ Final takeaways: Intuitively, it makes sense that the integration of white process noise over time would not result in linear scaling, because at each infinitesimal timestep the process noise is equally likely to be positive or negative. We would instead expect the scaling to approach linearity if the noise were highly time-correlated (i.e. if the process noise at time t is positive, the process noise at time t+dt is likely also positive). From an implementation standpoint, this is all fairly straightforward.
You'll likely have to do a fair amount of process and measurement noise fitting anyways, so to properly incorporate measurements at irregular time intervals, all you really need to do is to ensure that your process noise covariance is scaled by a factor of the square root of the difference in time between the last filter state and the current measurement time ($\Delta$), which should look something along the lines of: $\textbf{P}_{k|k-1} = \textbf{F}_k\textbf{P}_{k-1|k-1}\textbf{F}_k^{T} + \textbf{Q}(\Delta_k)$. or $\textbf{P}_{k|k-1} = \textbf{F}_k\textbf{P}_{k-1|k-1}\textbf{F}_k^{T} + \sqrt{\Delta_k} \cdot Z$. Perhaps obvious but I'll point it out just in case, more than likely your model dynamics $\textbf{F}$ are also affected by a non-uniform timestep (i.e. if you have any sort of velocity / acceleration components). For simplicity this is not addressed here.
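A one-step sketch of the prediction step with a gap-dependent $\textbf{Q}(\Delta)$ (scalar state; `q` and `F` are hypothetical constants). Note that many references instead scale the process-noise variance linearly in the gap, since $\mathrm{Var}[W_{t+\Delta}-W_t]=\Delta$; the $\sqrt{\Delta}$ factor below follows this answer's convention:

```python
import math

def predict(x, P, dt, q=0.1, F=1.0):
    """One Kalman prediction step; process noise grows with the gap dt."""
    x_pred = F * x
    P_pred = F * P * F + q * math.sqrt(dt)   # Q(dt) = q * sqrt(dt), per the text
    return x_pred, P_pred

x, P = 0.0, 1.0
_, P_short = predict(x, P, dt=0.01)   # short gap: little uncertainty added
_, P_long = predict(x, P, dt=4.0)     # long gap: much more uncertainty added
print(P_short, P_long)
```

Whichever scaling convention you adopt, the essential point stands: the predicted covariance must grow with the elapsed time since the last measurement.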
26,870
"Central limit theorem" for weighted sum of correlated random variables
In David Brillinger's "Time Series: Data Analysis and Theory" (1975, Holt, Rinehart and Winston), page 94, Theorem 4.4.1 states that under certain conditions the discrete Fourier transform values for an r vector-valued series at frequencies $\lambda_j(N)$ are asymptotically independent $r$-dimensional complex normal variates with mean vector 0, where $\lambda_j(N)=2\pi s_j(N)/N$. This happens to be a very important theorem in the development of estimates for the spectral density of stationary time series.
"Central limit theorem" for weighted sum of correlated random variables
In David Brillinger's "Time Series Data Analysis and Theory" 1975 Holt, Rinehart and Winston Publishers page 94 Theroem 4.4.1 states under certain condition the discrete fourier transform for an r vec
"Central limit theorem" for weighted sum of correlated random variables In David Brillinger's "Time Series Data Analysis and Theory" 1975 Holt, Rinehart and Winston Publishers page 94 Theroem 4.4.1 states under certain condition the discrete fourier transform for an r vector-valued series at frequencies λ$_j$(N) are asymptotically independent r dimensional complex normal variates with mean vector 0 where λ$_j$(N)=2π s$_j$(N)/N. This happens to be a very important theorem in the development of estimates for the spectral density of stationary time series.
"Central limit theorem" for weighted sum of correlated random variables In David Brillinger's "Time Series Data Analysis and Theory" 1975 Holt, Rinehart and Winston Publishers page 94 Theroem 4.4.1 states under certain condition the discrete fourier transform for an r vec
26,871
Generating random variables satisfying constraints
This paper and R package completely solved my problem. It uses a Markov Chain Monte Carlo method, which relies on the fact that once you find an initial solution of the constraints (e.g. through linear programming), you can generate arbitrarily many more by adding vectors from the null space of the constraint matrix $E$ (i.e. vectors that give zero when multiplied by $E$). Read about it here: http://www.vliz.be/imisdocs/publications/149403.pdf and here is the package: http://cran.r-project.org/web/packages/limSolve/index.html
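The null-space idea can be sketched in a few lines (equality constraints only; a real sampler like limSolve's xsample also handles inequality constraints and uses a proper MCMC kernel; the constraint below is a made-up example):

```python
import numpy as np

rng = np.random.default_rng(4)
E = np.array([[1.0, 1.0, 1.0, 1.0]])     # equality constraint: sum(x) = 10
f = np.array([10.0])

x0, *_ = np.linalg.lstsq(E, f, rcond=None)   # one particular solution of E x = f

# Null-space basis of E from the SVD: the rows of Vt beyond rank(E)
U, s, Vt = np.linalg.svd(E)
N = Vt[int((s > 1e-10).sum()):].T            # columns span {z : E z = 0}

# Any x0 + N @ z satisfies E x = f exactly, so z can be sampled freely
samples = x0 + rng.normal(size=(1000, N.shape[1])) @ N.T
print(samples.shape)
```

Every generated vector satisfies the equality constraint by construction; this is what makes the method so much cheaper than naive rejection.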
26,872
Generating random variables satisfying constraints
Might seem trivial (and not terribly machine-efficient), but consider repeating the process until you get a suitable answer, preferably modifying only a smaller subset each time. Can you create a "distance" measure for how far you are from your ideal answer? It might help you "optimize".
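A naive version of the repeat-until-suitable idea, with a "distance" function that could also drive a smarter local search (the constraint here is a made-up example):

```python
import numpy as np

rng = np.random.default_rng(5)

def distance(x):
    """How far a candidate is from the (hypothetical) constraint sum(x) == 1."""
    return abs(x.sum() - 1.0)

tol, accepted = 0.05, []
for _ in range(20000):
    x = rng.uniform(0.0, 1.0, size=3)
    if distance(x) < tol:            # keep only near-feasible candidates
        accepted.append(x)

print(len(accepted))
```

The acceptance rate collapses quickly as dimension grows or `tol` shrinks, which is why perturbing a small subset of an already-feasible solution (or following the distance gradient) scales much better.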
26,873
Automatic feature selection for anomaly detection
One practical approach (in the case of supervised learning, at least) is to include all possibly relevant features and use a (generalized) linear model (logistic regression, linear SVM, etc.) with regularization (L1 and/or L2). There are open-source tools (e.g. Vowpal Wabbit) that can deal with trillions of example/feature combinations for these types of models, so scalability is not an issue (besides, one can always use sub-sampling). The regularization helps to deal with feature selection.
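A toy version of this approach: logistic loss with an L1 penalty, fitted by proximal gradient descent (soft-thresholding). The data, `lam`, and `step` are hypothetical; tools like Vowpal Wabbit do the same thing at a vastly larger scale:

```python
import numpy as np

rng = np.random.default_rng(6)
n, p = 2000, 10
X = rng.normal(size=(n, p))
w_true = np.zeros(p)
w_true[:2] = [2.0, -2.0]                 # only the first two features matter
y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-X @ w_true))).astype(float)

w, lam, step = np.zeros(p), 50.0, 1e-3
for _ in range(500):
    p_hat = 1.0 / (1.0 + np.exp(-X @ w))
    w = w - step * (X.T @ (p_hat - y))                        # gradient step
    w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)  # L1 soft-threshold

print(np.round(w, 2))
```

The L1 penalty drives the irrelevant coefficients to exactly zero, which is the sense in which "the regularization helps to deal with feature selection".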
26,874
Robust multivariate Gaussian fit in R
There's also mclust: http://www.stat.washington.edu/research/reports/2012/tr597.pdf http://cran.r-project.org/web/packages/mclust/index.html One caution, though: mixture modelling in high dimensional space can get pretty CPU and memory intensive if your cloud of points is large. About four years ago I was doing a batch of 11-dimensional, 50-200K point data, and it was tending to run into 4-11GB of RAM and take up to a week to compute for each case (and I had 400). This is certainly possible, but can be a headache if you're using a shared compute cluster or have limited resources available.
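For readers working in Python rather than R, the same mixture trick can be sketched with scikit-learn's GaussianMixture (synthetic data; the two-component setup simply lets a broad component soak up the background noise):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# A dense Gaussian cloud plus uniform background noise in 3-D.
cloud = rng.normal(loc=0.0, scale=1.0, size=(500, 3))
noise = rng.uniform(-10.0, 10.0, size=(50, 3))
X = np.vstack([cloud, noise])

# Two components: a tight one captures the cloud while a broad one soaks up
# the noise, making the fit for the main component much more robust.
gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0).fit(X)
main = np.argmin(gmm.covariances_.trace(axis1=1, axis2=2))  # tightest component
print("estimated cloud mean:", gmm.means_[main])
```

The memory warning above still applies: with full covariance matrices the cost grows quadratically in dimension and linearly in sample size per EM iteration.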
26,875
Robust multivariate Gaussian fit in R
This sounds like a classic multivariate Gaussian Mixture Model. I think that the bayesm package might work. Here are some multivariate Gaussian mixture packages:
bayesm: cran.r-project.org/web/packages/bayesm/index.html
mixtools: www.jstatsoft.org/v32/i06/paper
26,876
Recommended method for finding archetypes or clusters
Using exemplars, i.e. data points which best describe the dataset as a whole, should be a reasonable first step. The most common exemplar clustering method is the Affinity Propagation (AP) methodology put forward by Frey & Dueck (2007) Clustering by Passing Messages Between Data Points; it is considered somewhat more robust to noise than standard $k$-means but usually considerably slower too. AP allows us to make "these (dependency structures) explicitly visible" by looking at the fitted availability and responsibility matrices; roughly speaking, these matrices encode how suitable candidate instance $j$ is to be the cluster centre (i.e. overall exemplar) for point $i$, and how well point $i$ would do to choose point $j$ as its exemplar, respectively. The R package apcluster is actually much more faithful to the original MATLAB implementation of the algorithm than the Python sklearn implementation of the Affinity Propagation clustering methodology, so I would suggest familiarising oneself first with the R version.
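A quick sketch of the sklearn version mentioned above on toy data (the blob locations and damping value are arbitrary choices for illustration):

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(2)
# Three well-separated 2-D blobs; AP should pick one exemplar per blob.
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(30, 2)) for c in (-5.0, 0.0, 5.0)])

ap = AffinityPropagation(damping=0.9, random_state=0).fit(X)
# Exemplars are actual rows of X, not averaged centroids as in k-means.
exemplars = X[ap.cluster_centers_indices_]
print(len(exemplars), "exemplars found")
```

Note that, unlike $k$-means, the number of clusters is not fixed in advance: it emerges from the preference parameter (here left at its default, the median similarity).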
26,877
Two years of data describing occurrence of violence - testing association with number of patients on ward
Here is an idea that connects your binary dependent variable to a continuous, unobserved variable; a connection that may let you leverage the power of time series models for continuous variables. Define: $V_{w,t} = 1$ if violent incident happened in ward $w$ during time period $t$ and 0 otherwise $P_{w,t}$ : Propensity for violence in ward $w$ at time $t$. $P_{w,t}$ is assumed to be a continuous variable that in some sense represents 'pent-up' feelings of the inmates which boil over at some time and results in violence. Following this reasoning, we have: $V_{w,t} = \begin{cases} 1 & \mbox{if } P_{w,t} \ge \tau \\ 0 & \mbox{otherwise} \end{cases}$ where, $\tau$ is an unobserved threshold which triggers a violent act. You can then use a time series model for $P_{w,t}$ and estimate the relevant parameters. For example, you could model $P_{w,t}$ as: $P_{w,t} = \alpha_0 + \alpha_1 P_{w,t-1} + ... + \alpha_p P_{w,t-p}+ \beta n_{w,t} + \epsilon_t$ where, $n_{w,t}$ is the number of patients in ward $w$ at time $t$. You could see if $\beta$ is significantly different from 0 to test your hypothesis that "more patients lead to an increase in probability of violence". The challenge of the above model specification is that you do not really observe $P_{w,t}$ and thus the above is not your usual time series model. I do not know anything about R so perhaps someone else will chip in if there is a package that would let you estimate models like the above.
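A small simulation makes the latent-threshold construction concrete; every parameter value below is invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
T = 500
tau = 4.0                          # unobserved threshold triggering violence
alpha0, alpha1, beta = 0.2, 0.6, 0.05
n = rng.integers(10, 30, size=T)   # patients on the ward each day

# AR(1) propensity: 'pent-up' feelings build up and occasionally cross tau.
P = np.zeros(T)
for t in range(1, T):
    P[t] = alpha0 + alpha1 * P[t - 1] + beta * n[t] + rng.normal(scale=0.5)

V = (P >= tau).astype(int)         # only this binary indicator is observed
print("violent days:", int(V.sum()), "of", T)
```

Estimation would then mean recovering $\alpha$, $\beta$ and $\tau$ from $V$ and $n$ alone, since $P$ is never observed; that is what makes this harder than a standard time series fit.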
26,878
What are some good frameworks for method selection?
John, I am not sure my suggestion may be of help. But, in any case the book Intuitive Biostatistics by Harvey Motulsky may be of assistance. Chapter 37 'Choosing a Test' has a pretty good table on page 298 that tells you given the nature of the data set and problem you are addressing what statistical method you should use. Amazon lets you search through this book. Good luck.
26,879
Why convert spectrogram to RGB for machine learning?
A less trivial explanation can be that converting gray-scale to RGB is effectively adding a layer of ReLU neurons with fixed parameters. For example, converting an image to RGB using the viridis colour-map applies something similar to three piecewise linear functions, which can be composed out of ReLU functions. This addition has the effect of increasing the depth (extra layer) and width (potential extra neurons in subsequent layers) of the neural network. Both effects can potentially improve the performance of the model (if its current depth and/or width was not sufficient).
Width
A simple example is converting a single grayscale channel to three RGB channels by simply copying the image three times. This can effectively be like performing some ensemble learning. Your neural network or decision tree may converge to different patterns on the different channels, which can later be merged in an average with a final layer or classification boundary. You could alternatively see it as effectively making several of the hidden layers three times wider (but not fully connecting them, and adding only three times more connections). This can create some potential for different training and convergence which is potentially better.
Depth
The additional colour-mapping layer may allow patterns that are not possible with fewer connections. The flexibility is increased. The simplest example is an image of a single pixel that passes through a layer with a single neuron with a step function (so this is an example where even the number of neurons remains the same and the width of the subsequent network is not changed). For BW, this is a two-parameter function (weight $w_1$ and bias $b$) that effectively makes a classification based on whether the input is above or below some level. For RGB, we then get two additional parameters, $w_2$ and $w_3$, for the extra channels, and this makes it possible to create more patterns. For example, we can make a classification when the grayscale pixel has either a high or a low value. Obviously one can achieve the same without converting to RGB, by instead adding more neurons or an additional layer. But possibly the cases where RGB performed better did not test this out. Also, the conversion to RGB, using some useful scale, makes a hardcoded separation into shadows, middle tones and highlights, which a NN would need training and extra neurons for. (So in a way it is adding an extra layer which is regularised. And it is also adding pre-trained information through the human decision to choose a particular colour map instead of another; i.e. the human chooses the trigger points of the ReLU layer, and the conversion to RGB is additional information.) Anyway, this simple example is a case where it is possible to prove that RGB can perform better (if we compare with a limited model, like only a fixed number of neurons and layers).
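To illustrate the "colour map as fixed piecewise-linear channels" view, here is a toy stand-in for a colour map built from np.interp (the channel knot values are rough, invented approximations, not the real viridis data):

```python
import numpy as np

# A toy stand-in for a colour map: each RGB channel is a piecewise-linear
# function of intensity - exactly the kind of function ReLUs can compose.
# The knot values below are rough, invented approximations of viridis.
knots = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
channels = {
    "r": np.array([0.27, 0.23, 0.13, 0.37, 0.99]),
    "g": np.array([0.00, 0.32, 0.57, 0.79, 0.91]),
    "b": np.array([0.33, 0.55, 0.55, 0.38, 0.14]),
}

def to_rgb(gray):
    """Map a grayscale array in [0, 1] to three stacked channels."""
    return np.stack([np.interp(gray, knots, v) for v in channels.values()], axis=-1)

gray = np.linspace(0.0, 1.0, 16).reshape(4, 4)  # toy 'spectrogram'
rgb = to_rgb(gray)
print(gray.shape, "->", rgb.shape)  # (4, 4) -> (4, 4, 3)
```

Each channel is a fixed nonlinear transform of the single grayscale input, which is the sense in which the conversion behaves like a frozen extra layer in front of the network.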
26,880
Why convert spectrogram to RGB for machine learning?
I do not have very "hard" evidence, but I have a publication under review where we have trained ResNet50 to regress some values from noisy spectrograms. Pretraining in ImageNet is better than starting from random initialization For pretrained networks, using color spectrograms is better than grayscale spectrograms (normalized to 0-1) All that I have is comparative experiments in a couple of datasets, so take it or leave it :)
26,881
Why convert spectrogram to RGB for machine learning?
Colormapping is nonlinear filtering. A color map is simply a transform; the breakup into three dimensions further interprets it as filtering and decomposition. turbo is preferable to jet for inspection (1 -- 2 -- 3) - which is to say, it's not arbitrary, and the human visual system favors it. In turbo (or jet), as one use case, we can quickly skim an image for peaks, which will be red, and we may wish to focus only on those - that's identical to the "R" channel. "Image" involves efficient (and nonlinear) compression. The standard approach to STFT compression is direct subsampling (i.e. hop_size), which aliases. An improvement is decimation, i.e. lowpass filtering + subsampling, which is a linear compression. If something so simple was effective, there'd be no need for all the sophistication of JPEG. In ML terms, we can view "save as JPEG" as a highly effective autoencoder, also effective dimensionality reduction. There's more to say but I'll just share the main points for now. Note that this is completely separate from using image-excelling NNs on STFT images. That can be detrimental. Also, @Ghostpunk's answer is mistaken and misleading, as I commented. It may be owed to the popular "windowed Fourier transform" interpretation of STFT. Spectrogram losses can also be measured. Relevant posts: Equivalence between "windowed Fourier transform" and STFT as convolutions/filtering Role of window length and overlap in uncertainty principle? Note I realized the question, and my answer, are ill-suited for this network, and I may not be developing my answer further here. If I develop it elsewhere, I'll link it. In the meantime, refer to my discussion with @SextusEmpiricus. Still self-accepting since, though elaboration is due, my answer can be understood with the right (mainly signal processing + feature engineering) background, and I believe it contains the most pertinent explanation.
26,882
Intuitive way to connect gamma and chi-squared distributions [duplicate]
Intuition has to be trained through arduous application to become other than misleading. There are too many implied questions here for a single post. However, addressing those here does provide a series of links summarizing some of the properties of the gamma distribution, so the implied questions posited may have some value to the potential reader.
Q1: Chi-squared and gamma distribution
A1: See end of answer here, and even more explicitly here. In particular in that latter answer, note the following statement: "Unique properties are scattered around all over Mathematics, and most of the time, they don't reflect some "deeper intuition" or "structure" - they just exist (thankfully)."
Q2: "is there a way to connect the Poisson process interpretation of the gamma"
A2: See answer: No, there is no Poisson process interpretation. Poisson is a subset of gamma for positive integers only. Thus, there is a gamma simplification that leads to Poisson, not the converse.
Q3: An implied question: "I intuit the gamma distribution as the waiting time for the $k$-th arrival in a Poisson process. This supports the idea that independent gamma r.v.s (with same rate parameter) can be summed to another gamma distribution."
A3: Backwards again. Although the special case of the same rate parameter has closure under convolution (sum of r.v.s), this is not the case without the same rate parameter, where the sum (convolution) of two gamma distributions is not closed by a gamma distribution. For what it actually is, see this link for the sum of two gammas, and for more than two see this other link. Second, as a Poisson distribution is a gamma distribution subset, one can at most "suspect" that a gamma distribution might have something to do with wait times. What is lacking is any such determination for anything beyond that special-case simplification.
Statement: "A gamma distribution with a large shape parameter can be thought of as the sum of many independent gamma r.v.s with smaller (sic, iff identical) shape parameters." It can, but to what end?
Q4: "By CLT, the gamma converges to a normal distribution as the shape parameter grows."
A4: The shape does, but the mean would grow without bound to do so. To maintain stationarity, a lot more has to be done. See this answer.
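For reference, the chi-squared/gamma connection in Q1 is a pure parameter specialisation: a chi-squared with $k$ degrees of freedom is a gamma with shape $k/2$ and scale $2$,

```latex
f_{\chi^2_k}(x)
= \frac{x^{k/2-1}\,e^{-x/2}}{2^{k/2}\,\Gamma(k/2)}
= \frac{x^{\alpha-1}\,e^{-x/\theta}}{\theta^{\alpha}\,\Gamma(\alpha)}
\quad\text{with } \alpha = \tfrac{k}{2},\ \theta = 2 .
```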
26,883
Is there a way to recover a temporal dependence structure in a time series from a regression against time?
It is important to distinguish between the data generating process and a mathematical relationship. It may be possible that there is a (possibly non-unique) mapping $\hat{X}(t) = f(t)$ $\rightarrow$ $\hat{X}_i = f(X_{i-1},X_{i-2},...)$. However, this does not mean that both can be considered the same data generating process. When we model a time series (or a process) by $\hat{X}_i = f(X_{i-1},X_{i-2},...)$, we assume that each new observation is by design generated by this process. In polynomial modeling, the innovations from previous periods play no role in influencing the realized value of the current period. In dependence-structure modeling, innovations from previous periods are directly part of the current observation. So you see, there is a very significant difference in the data generating process. On the other hand, there may be a mathematical relationship that gives a non-unique mapping. Consider this:
$\hat{X}(t) = a_0+a_1t+a_2t^2$
$\implies \hat{X}(t-1) = a_0+a_1(t-1)+a_2(t-1)^2$
$\implies \hat{X}(t-1) = \hat{X}(t)-a_1+a_2-2a_2t$
$\implies \Delta\hat{X}(t) \equiv \hat{X}(t)-\hat{X}(t-1) = a_1-a_2+2a_2t$
$\implies \Delta\hat{X}(t)-\Delta\hat{X}(t-1)=2a_2$
Therefore, $\hat{X}(t)=2\hat{X}(t-1)-\hat{X}(t-2)+2a_2$.
So, from $\hat{X}(t) = f(t)$, we have found $\hat{X}_i = f(X_{i-1},X_{i-2})$. What's fishy in this? We have actually found $\hat{X}_i = f(\hat{X}_{i-1},\hat{X}_{i-2})$. But interestingly, we can still model $\hat{X}(t) = a_0+a_1t+a_2t^2$ by $X(t)=2X(t-1)-X(t-2)+\epsilon_t$; just the innovations will be completely different now. Further, the latter relationship will hold irrespective of the values of $a_0$ and $a_1$, so the relationship will not be unique.
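The algebra above is easy to check numerically: any quadratic in $t$ satisfies the derived second-order recurrence, whatever the coefficients (the values below are arbitrary):

```python
# Any quadratic X(t) = a0 + a1*t + a2*t**2 satisfies the second-order
# recurrence X(t) = 2*X(t-1) - X(t-2) + 2*a2, whatever a0 and a1 are.
a0, a1, a2 = 3.0, -1.5, 0.7   # arbitrary illustrative coefficients

def X(t):
    return a0 + a1 * t + a2 * t ** 2

for t in range(2, 10):
    assert abs(X(t) - (2 * X(t - 1) - X(t - 2) + 2 * a2)) < 1e-9
print("recurrence holds for every tested t")
```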
Is there a way to recover a temporal dependence structure in a time series from a regression against
It is important to distinguish between data generating process and mathematical relationship. It may be possible that there is a mapping (and possibly non-unique), $\hat{X}(t) = f(t)$ $\rightarrow$ $\
Is there a way to recover a temporal dependence structure in a time series from a regression against time? It is important to distinguish between data generating process and mathematical relationship. It may be possible that there is a mapping (and possibly non-unique), $\hat{X}(t) = f(t)$ $\rightarrow$ $\hat{X}_i = f(X_{i-1},X_{i-2},...)$. However, this does not mean that both can be considered as same data generating process. When we model a time series (or a process) by $\hat{X}_i = f(X_{i-1},X_{i-2},...)$, we assume that each new observation by design is generated by this process. In polynomial modeling the innovations from previous periods play no role in influencing the realized value of current period. In dependence structure modeling, innovations from previous periods are directly part of current observation. So you see, there is a very significant difference in data generation process. On the other hand, there may be a mathematical relationship that can give a non-unique mapping. Consider this: $\hat{X}(t) = a_0+a_1t+a_2t^2$ $\implies \hat{X}(t-1) = a_0+a_1(t-1)+a_2(t-1)^2$ $\implies \hat{X}(t-1) = \hat{X}(t)-a_1+a_2-2a_2t$ $\implies \Delta\hat{X}(t) \equiv \hat{X}(t)-\hat{X}(t-1) = a_1-a_2+2a_2t$ $\implies \Delta\hat{X}(t)-\Delta\hat{X}(t-1)=2a_2$ Therefore, $\hat{X}(t)=2\hat{X}(t-1)-\hat{X}(t-2)+2a_2$ So, from $\hat{X}(t) = f(t)$, we have found $\hat{X}_i = f(X_{i-1},X_{i-2})$. What's fishy in this? We have actually found $\hat{X}_i = f(\hat{X}_{i-1},\hat{X}_{i-2})$. But interestingly, we can still model $\hat{X}(t) = a_0+a_1t+a_2t^2$ by $X(t)=2X(t-1)-X(t-2)+\epsilon_t$. Just the innovations will be completely different now. Further, the latter relationship is will hold irrespective of the value of $a_0$ and $a_1$. So the relationship will not be unique.
Is there a way to recover a temporal dependence structure in a time series from a regression against It is important to distinguish between data generating process and mathematical relationship. It may be possible that there is a mapping (and possibly non-unique), $\hat{X}(t) = f(t)$ $\rightarrow$ $\
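The derivation above is easy to check numerically. A minimal Python sketch (the coefficients are arbitrary illustrative values) verifying that any quadratic $\hat{X}(t) = a_0+a_1t+a_2t^2$ satisfies the recurrence $\hat{X}(t)=2\hat{X}(t-1)-\hat{X}(t-2)+2a_2$, whatever $a_0$ and $a_1$ are:

```python
# Verify that any quadratic trend X(t) = a0 + a1*t + a2*t^2 satisfies the
# recurrence X(t) = 2*X(t-1) - X(t-2) + 2*a2, regardless of a0 and a1.
a0, a1, a2 = 3.0, -1.5, 0.25  # arbitrary coefficients

def X(t):
    return a0 + a1 * t + a2 * t ** 2

for t in range(2, 10):
    lhs = X(t)
    rhs = 2 * X(t - 1) - X(t - 2) + 2 * a2
    assert abs(lhs - rhs) < 1e-12, (t, lhs, rhs)
print("recurrence holds for all tested t")
```

The second difference of a quadratic is the constant $2a_2$, which is exactly what the recurrence encodes; changing `a0` or `a1` leaves the check passing, illustrating the non-uniqueness of the mapping.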
26,884
Is there a way to recover a temporal dependence structure in a time series from a regression against time?
Great Question: There is nonsense and there is nonsense but the most non-sensical nonsense of them all is statistical nonsense as promoted by PROPHET promoting polynomials in time rather than level/step shifts (intercept changes) and time trends with possible break points. Please see my answer/comments Why is my high degree polynomial regression model suddenly unfit for the data? and furthermore for an intelligent assessment of anachronistic polynomial fitting see @huber's insightful reflections in Does the p-value in the incremental F-test determine how many trials I expect to get correct? . Forming an ARIMA model with possible deterministic trends and/or levels is much more appropriate and leagues beyond Prophet's capabilities. I have fully researched Prophet and in my opinion find the only thing of value is the creative choice of the name. Their treatment of daily data is particularly wanting. ANSWER: Not to my knowledge, since any sufficient ARIMA model might contain not only ARIMA structure but Pulses, Level/Step shifts, Seasonal Pulses and deterministic time trends.
Is there a way to recover a temporal dependence structure in a time series from a regression against
Great Question: There is nonsense and there is nonsense but the most non-sensical nonsense of them all is statistical nonsense as promoted by PROPHET promoting polynomials in time rather than level\st
Is there a way to recover a temporal dependence structure in a time series from a regression against time? Great Question: There is nonsense and there is nonsense but the most non-sensical nonsense of them all is statistical nonsense as promoted by PROPHET promoting polynomials in time rather than level/step shifts (intercept changes) and time trends with possible break points. Please see my answer/comments Why is my high degree polynomial regression model suddenly unfit for the data? and furthermore for an intelligent assessment of anachronistic polynomial fitting see @huber's insightful reflections in Does the p-value in the incremental F-test determine how many trials I expect to get correct? . Forming an ARIMA model with possible deterministic trends and/or levels is much more appropriate and leagues beyond Prophet's capabilities. I have fully researched Prophet and in my opinion find the only thing of value is the creative choice of the name. Their treatment of daily data is particularly wanting. ANSWER: Not to my knowledge, since any sufficient ARIMA model might contain not only ARIMA structure but Pulses, Level/Step shifts, Seasonal Pulses and deterministic time trends.
Is there a way to recover a temporal dependence structure in a time series from a regression against Great Question: There is nonsense and there is nonsense but the most non-sensical nonsense of them all is statistical nonsense as promoted by PROPHET promoting polynomials in time rather than level\st
26,885
Is there a way to recover a temporal dependence structure in a time series from a regression against time?
Have you tried asymmetric eigenvector maps (AEM)? It is useful for summarizing temporal auto-correlation in orthogonal vectors that can be used as predictive variables. Also, you have the same approach for spatial auto-correlation, but it is called Moran eigenvector maps (MEM). I hope this will help you, Best José.
Is there a way to recover a temporal dependence structure in a time series from a regression against
Have you tried asymmetric eigenvector maps (AEM)? It is useful for summarizing temporal auto-correlation in orthogonal vectors that can be used as predictive variables.
Is there a way to recover a temporal dependence structure in a time series from a regression against time? Have you tried asymmetric eigenvector maps (AEM)? It is useful for summarizing temporal auto-correlation in orthogonal vectors that can be used as predictive variables. Also, you have the same approach for spatial auto-correlation, but it is called Moran eigenvector maps (MEM). I hope this will help you, Best José.
Is there a way to recover a temporal dependence structure in a time series from a regression against Have you tried asymmetric eigenvector maps (AEM)? It is useful for summarizing temporal auto-correlation in orthogonal vectors that can be used as predictive variables.
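The answer names AEM/MEM without showing code. As a rough illustration of the (symmetric) Moran eigenvector map idea, one can eigen-decompose a truncated, double-centered temporal distance matrix and use the positive-eigenvalue eigenvectors as orthogonal predictors. This is a PCNM-style numpy sketch under assumed settings (series length, truncation threshold); a real analysis would use dedicated R packages such as adespatial, and the asymmetric (AEM) variant differs in how the connectivity matrix is built:

```python
import numpy as np

# PCNM/MEM-style sketch for a regularly sampled time series: eigenvectors of a
# truncated, double-centered temporal distance matrix serve as orthogonal
# predictors that capture autocorrelation at different temporal scales.
n, thr = 50, 1.0                      # series length and truncation distance (assumed)
t = np.arange(n, dtype=float)
d = np.abs(t[:, None] - t[None, :])   # pairwise temporal distances
d[d > thr] = 4 * thr                  # PCNM truncation rule
g = -0.5 * d ** 2
g -= g.mean(axis=0, keepdims=True)    # double-centering (Gower)
g -= g.mean(axis=1, keepdims=True)
vals, vecs = np.linalg.eigh(g)
order = np.argsort(vals)[::-1]
vals, vecs = vals[order], vecs[:, order]
mem = vecs[:, vals > 1e-9]            # keep positive-eigenvalue axes as predictors
print(mem.shape)
```

The retained columns of `mem` are mutually orthogonal, so they can be entered jointly into a regression without collinearity.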
26,886
How to test a linear relationship between log odds and predictors before performing logistic regression?
Nice question. In practice, very few people pretest this assumption, or test it at all. To do so you could divide each independent variable (X) into perhaps 8 or 10 or 15 equal-interval categories. Then compute log-odds as ln(p/[1-p]) within each category, where p = the proportion of cases for which the dependent variable = 1 rather than 0. Finally, use ANOVA or, informally, view a scatterplot to assess the linearity of the relationship between log-odds and this X.
How to test a linear relationship between log odds and predictors before performing logistic regress
Nice question. In practice, very few people pretest this assumption, or test it at all. To do so you could divide each independent variable (X) into perhaps 8 or 10 or 15 equal-interval categories.
How to test a linear relationship between log odds and predictors before performing logistic regression? Nice question. In practice, very few people pretest this assumption, or test it at all. To do so you could divide each independent variable (X) into perhaps 8 or 10 or 15 equal-interval categories. Then compute log-odds as ln(p/[1-p]) within each category, where p = the proportion of cases for which the dependent variable = 1 rather than 0. Finally, use ANOVA or, informally, view a scatterplot to assess the linearity of the relationship between log-odds and this X.
How to test a linear relationship between log odds and predictors before performing logistic regress Nice question. In practice, very few people pretest this assumption, or test it at all. To do so you could divide each independent variable (X) into perhaps 8 or 10 or 15 equal-interval categories.
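The binning procedure described above can be sketched in a few lines. This uses simulated data with assumed parameters (a logit that truly is linear in x), so the plotted log-odds should come out roughly linear:

```python
import numpy as np

# Bin X into 10 equal-interval categories, compute empirical log-odds per bin,
# and inspect whether they are roughly linear in X.
rng = np.random.default_rng(0)
n = 5000
x = rng.uniform(-3, 3, n)
y = rng.binomial(1, 1 / (1 + np.exp(-(0.5 + 1.2 * x))))  # logit linear in x

edges = np.linspace(x.min(), x.max(), 11)
idx = np.clip(np.digitize(x, edges) - 1, 0, 9)
log_odds = []
for b in range(10):
    p = y[idx == b].mean()                 # proportion of 1s in this bin
    log_odds.append(np.log(p / (1 - p)))
    print(f"bin {b}: log-odds = {log_odds[-1]:+.2f}")
# a scatterplot of bin midpoints vs log_odds should look roughly linear
```

With real data, a markedly curved pattern here suggests transforming X (or adding polynomial/spline terms) before fitting the logistic regression.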
26,887
Why is a large choice of K lowering my cross validation score?
r^2 score is undefined when applied to a single sample (e.g. leave-one-out CV). r^2 is not good for evaluation of small test sets: when it's used to evaluate a sufficiently-small test set, the score can be far into the negatives despite good predictions. Given a single sample, a good prediction for a given domain may appear terrible: from sklearn.metrics import r2_score true = [1] predicted = [1.01] # prediction of a single value, off by 1% print(r2_score(true, predicted)) # 0.0 Increase the size of the test set (keeping the accuracy of predictions the same), and suddenly the r^2 score appears near-perfect: true = [1, 2, 3] predicted = [1.01, 2.02, 3.03] print(r2_score(true, predicted)) # 0.9993 Taken to the other extreme, if the test size is 2 samples, and we happen to be evaluating 2 samples that are close to each other by chance, this will have substantial impact on the r^2 score, even if the predictions are quite good: true = [20.2, 20.1] # actual target values from the Boston Housing dataset predicted = [19, 21] print(r2_score(true, predicted)) # -449.0
Why is a large choice of K lowering my cross validation score?
r^2 score is undefined when applied to a single sample (e.g. leave-one-out CV). r^2 is not good for evaluation of small test sets: when it's used to evaluate a sufficiently-small test set, the score
Why is a large choice of K lowering my cross validation score? r^2 score is undefined when applied to a single sample (e.g. leave-one-out CV). r^2 is not good for evaluation of small test sets: when it's used to evaluate a sufficiently-small test set, the score can be far into the negatives despite good predictions. Given a single sample, a good prediction for a given domain may appear terrible: from sklearn.metrics import r2_score true = [1] predicted = [1.01] # prediction of a single value, off by 1% print(r2_score(true, predicted)) # 0.0 Increase the size of the test set (keeping the accuracy of predictions the same), and suddenly the r^2 score appears near-perfect: true = [1, 2, 3] predicted = [1.01, 2.02, 3.03] print(r2_score(true, predicted)) # 0.9993 Taken to the other extreme, if the test size is 2 samples, and we happen to be evaluating 2 samples that are close to each other by chance, this will have substantial impact on the r^2 score, even if the predictions are quite good: true = [20.2, 20.1] # actual target values from the Boston Housing dataset predicted = [19, 21] print(r2_score(true, predicted)) # -449.0
Why is a large choice of K lowering my cross validation score? r^2 score is undefined when applied to a single sample (e.g. leave-one-out CV). r^2 is not good for evaluation of small test sets: when it's used to evaluate a sufficiently-small test set, the score
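One way around the problem the answer describes: with a large K each test fold is tiny, so the per-fold r² is erratic (and undefined for one sample); pooling the out-of-fold predictions and computing a single r² gives a stable number. A pure-numpy sketch with ordinary least squares and simulated data (all sizes and parameters are illustrative assumptions):

```python
import numpy as np

# Compare mean per-fold r^2 (tiny test folds) with r^2 computed once on the
# pooled out-of-fold predictions.
rng = np.random.default_rng(0)
n = 60
X = rng.normal(size=(n, 3))
beta = np.array([2.0, -1.0, 0.5])
y = X @ beta + rng.normal(scale=0.3, size=n)

def r2(t, p):
    return 1 - np.sum((t - p) ** 2) / np.sum((t - t.mean()) ** 2)

K = 30                                   # 30 folds -> only 2 test points each
folds = np.array_split(np.arange(n), K)
pooled = np.empty(n)
per_fold = []
for test in folds:
    train = np.setdiff1d(np.arange(n), test)
    coef, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
    pooled[test] = X[test] @ coef
    per_fold.append(r2(y[test], pooled[test]))

print("mean per-fold r^2:", np.mean(per_fold))
print("pooled r^2       :", r2(y, pooled))
```

The per-fold scores can swing wildly (two nearby y values make the denominator tiny), while the pooled score reflects the actual predictive quality.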
26,888
Should I use a seasonal arima or stl decomposition and model residuals only?
Outliers Outliers should be easily detected by plotting a box-plot. "In order to be an outlier, the data value must be larger than Q3 by at least 1.5 times the interquartile range (IQR), or smaller than Q1 by at least 1.5 times the IQR". For a more detailed way of detecting outliers please refer to: https://stackoverflow.com/questions/24750819/outlier-detection-of-time-series-data-in-r Anomalies To detect anomalies check this RPubs, it seems quite simple to perform: https://www.rpubs.com/vmez/409672 STL vs seasonal adjustment of arima From what I know, which is not a lot, the differencing (d) term of SARIMA simply computes the difference between consecutive observations, which accounts for the trend. The D component, or seasonal differencing, is the difference between an observation and the previous observation from the same season (for monthly data it is y_t-y_t-12). These differencing techniques are relatively simple compared to the mathematical computations behind stl(). This thread here will answer your question better: Is stl a good technique for forecasting, instead of Arima? To sum it up: "STL can deal with phenomena such as multiple seasonalities, high-frequency seasonalities better than arima", so it basically depends on your data.
Should I use a seasonal arima or stl decomposition and model residuals only?
Outliers Outliers should be easily detected by plotting a box-plot. "In order to be an outlier, the data value must be larger than Q3 by at least 1.5 times the interquartile range (IQR), or. smaller
Should I use a seasonal arima or stl decomposition and model residuals only? Outliers Outliers should be easily detected by plotting a box-plot. "In order to be an outlier, the data value must be larger than Q3 by at least 1.5 times the interquartile range (IQR), or smaller than Q1 by at least 1.5 times the IQR". For a more detailed way of detecting outliers please refer to: https://stackoverflow.com/questions/24750819/outlier-detection-of-time-series-data-in-r Anomalies To detect anomalies check this RPubs, it seems quite simple to perform: https://www.rpubs.com/vmez/409672 STL vs seasonal adjustment of arima From what I know, which is not a lot, the differencing (d) term of SARIMA simply computes the difference between consecutive observations, which accounts for the trend. The D component, or seasonal differencing, is the difference between an observation and the previous observation from the same season (for monthly data it is y_t-y_t-12). These differencing techniques are relatively simple compared to the mathematical computations behind stl(). This thread here will answer your question better: Is stl a good technique for forecasting, instead of Arima? To sum it up: "STL can deal with phenomena such as multiple seasonalities, high-frequency seasonalities better than arima", so it basically depends on your data.
Should I use a seasonal arima or stl decomposition and model residuals only? Outliers Outliers should be easily detected by plotting a box-plot. "In order to be an outlier, the data value must be larger than Q3 by at least 1.5 times the interquartile range (IQR), or. smaller
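The 1.5×IQR box-plot rule quoted above is simple to implement directly; a small sketch with made-up numbers:

```python
import numpy as np

# Flag points outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR], the standard box-plot rule.
def iqr_outliers(x, k=1.5):
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return (x < lo) | (x > hi)

x = np.array([10.0, 11.0, 10.5, 9.8, 10.2, 30.0])  # 30.0 is a clear outlier
print(iqr_outliers(x))  # only the last value is flagged
```

Note that for trended or seasonal series this rule should be applied to residuals (e.g. after an STL or SARIMA fit), not to the raw values, or the trend itself will be flagged.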
26,889
Is there a Monte Carlo/MCMC sampler implemented which can deal with isolated local maxima of posterior distribution?
Neither of the strategies above is particularly suitable for multiple optima. Better choices are Differential Evolution MCMC and derived MCMCs such as DREAM. These algorithms work with several MCMC chains that are mixed to generate proposals. If you have at least one chain in each optimum, they will be able to jump efficiently between optima. An implementation in R is available here https://cran.r-project.org/web/packages/BayesianTools/index.html
Is there a Monte Carlo/MCMC sampler implemented which can deal with isolated local maxima of posteri
Neither of the strategies above is particularly suitable for multiple optima. A better choice are Differential Evolution MCMC and derived MCMCs such as DREAM. These algorithms work with several MCMC
Is there a Monte Carlo/MCMC sampler implemented which can deal with isolated local maxima of posterior distribution? Neither of the strategies above is particularly suitable for multiple optima. Better choices are Differential Evolution MCMC and derived MCMCs such as DREAM. These algorithms work with several MCMC chains that are mixed to generate proposals. If you have at least one chain in each optimum, they will be able to jump efficiently between optima. An implementation in R is available here https://cran.r-project.org/web/packages/BayesianTools/index.html
Is there a Monte Carlo/MCMC sampler implemented which can deal with isolated local maxima of posteri Neither of the strategies above is particularly suitable for multiple optima. A better choice are Differential Evolution MCMC and derived MCMCs such as DREAM. These algorithms work with several MCMC
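To make the mechanism concrete, here is a minimal sketch of a Differential Evolution MCMC (ter Braak-style) on a toy bimodal 1-D target. Proposals are built from differences between other chains, and an occasional gamma = 1 move lets chains hop between modes. All tuning constants are illustrative assumptions, not a production sampler (for real work, use an established implementation such as BayesianTools in R):

```python
import numpy as np

# DE-MC on a mixture of N(-5, 1) and N(+5, 1): chain differences generate
# proposals, so chains sitting in different modes enable mode-to-mode jumps.
rng = np.random.default_rng(1)

def log_post(x):
    return np.logaddexp(-0.5 * (x + 5) ** 2, -0.5 * (x - 5) ** 2)

n_chains, n_iter = 8, 4000
x = rng.uniform(-8, 8, n_chains)      # start chains spread over both modes
lp = log_post(x)
draws = []
for it in range(n_iter):
    gamma = 1.0 if it % 10 == 0 else 2.38 / np.sqrt(2)   # occasional mode-jump move
    for i in range(n_chains):
        a, b = rng.choice([j for j in range(n_chains) if j != i], 2, replace=False)
        prop = x[i] + gamma * (x[a] - x[b]) + rng.normal(0, 1e-4)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp[i]:      # Metropolis accept
            x[i], lp[i] = prop, lp_prop
    draws.append(x.copy())
samples = np.concatenate(draws[n_iter // 2:])
print("mass left / right of 0:", np.mean(samples < 0), np.mean(samples > 0))
```

A single random-walk Metropolis chain started in one mode would essentially never cross the deep valley between the modes; here the cross-chain difference vectors do it for free.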
26,890
Plotting predicted values in ARIMA time series in R
That is why you shouldn't do ARIMA or anything on non-stationary data. The answer to why an ARIMA forecast goes flat is pretty obvious after looking at the ARIMA equation and one of its assumptions. This is a simplified explanation, do not treat it as a math proof. Let's consider an AR(1) model, but this is true for any ARIMA(p,d,q). The equation of AR(1) is: $$ y_t = \beta y_{t-1} + \alpha + \epsilon$$ and the assumption about $ \beta $ is that $|\beta| < 1$. With such a $\beta$, each successive forecast is pulled closer to the unconditional mean, until $y_t = const = \alpha/(1-\beta)$. In that case, how to deal with such data? You have to make it stationary by differencing ($new.data=y_t-y_{t-1}$) or calculating % change ($new.data=y_t/y_{t-1} -1$). You are modeling the differences, not the data itself. The differences become constant with time; that is your trend. require(tseries) require(forecast) require(astsa) dif<-diff(gtemp) fit = auto.arima(dif) pred = predict(fit, n.ahead = 50) ts.plot(dif, pred$pred, lty = c(1,3), col=c(5,2)) gtemp_pred<-gtemp[length(gtemp)] for(i in 1:length(pred$pred)){ gtemp_pred[i+1]<-gtemp_pred[i]+pred$pred[i] } plot(c(gtemp,gtemp_pred),type="l")
Plotting predicted values in ARIMA time series in R
That is why you shouldn't do ARIMA or anything on non stationary data. Answer to a question why ARIMA forecast is getting flat is pretty obvious after looking at ARIMA equation and one of assumptions.
Plotting predicted values in ARIMA time series in R That is why you shouldn't do ARIMA or anything on non-stationary data. The answer to why an ARIMA forecast goes flat is pretty obvious after looking at the ARIMA equation and one of its assumptions. This is a simplified explanation, do not treat it as a math proof. Let's consider an AR(1) model, but this is true for any ARIMA(p,d,q). The equation of AR(1) is: $$ y_t = \beta y_{t-1} + \alpha + \epsilon$$ and the assumption about $ \beta $ is that $|\beta| < 1$. With such a $\beta$, each successive forecast is pulled closer to the unconditional mean, until $y_t = const = \alpha/(1-\beta)$. In that case, how to deal with such data? You have to make it stationary by differencing ($new.data=y_t-y_{t-1}$) or calculating % change ($new.data=y_t/y_{t-1} -1$). You are modeling the differences, not the data itself. The differences become constant with time; that is your trend. require(tseries) require(forecast) require(astsa) dif<-diff(gtemp) fit = auto.arima(dif) pred = predict(fit, n.ahead = 50) ts.plot(dif, pred$pred, lty = c(1,3), col=c(5,2)) gtemp_pred<-gtemp[length(gtemp)] for(i in 1:length(pred$pred)){ gtemp_pred[i+1]<-gtemp_pred[i]+pred$pred[i] } plot(c(gtemp,gtemp_pred),type="l")
Plotting predicted values in ARIMA time series in R That is why you shouldn't do ARIMA or anything on non stationary data. Answer to a question why ARIMA forecast is getting flat is pretty obvious after looking at ARIMA equation and one of assumptions.
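The geometric decay of the AR(1) point forecast toward the unconditional mean $\alpha/(1-\beta)$ can be shown in a few lines of plain Python (illustrative numbers only):

```python
# For AR(1) y_t = alpha + beta*y_{t-1} + e_t with |beta| < 1, the h-step point
# forecast decays geometrically to the unconditional mean alpha / (1 - beta).
alpha, beta, y_last = 2.0, 0.8, 20.0
forecasts = []
y = y_last
for _ in range(30):
    y = alpha + beta * y            # recursive point forecast (errors set to 0)
    forecasts.append(y)
print(forecasts[0])                 # 18.0: one step toward the mean
print(round(forecasts[-1], 3))      # close to 10.0 = alpha / (1 - beta)
```

This is exactly why forecasts from a well-specified stationary model "flatten out": the flat line is the unconditional mean, and everything else decays toward it at rate $\beta^h$.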
26,891
Plotting predicted values in ARIMA time series in R
One common reason why predictions look that "linear" is that you may not have made the series as stationary as it needs to be. I would check for seasonality etc. In other words, just because an ADF and KPSS test says the data is stationary, it is worthwhile to inspect the data visually or think intuitively about the trend to make further adjustments before you predict it. This may avoid that upward "linear" trend look.
Plotting predicted values in ARIMA time series in R
One popular reason to why predictions look that "linear" is because you may not have made it as stationary as it needs to be. I would check for seasonality etc. In other words, just because a ADF and
Plotting predicted values in ARIMA time series in R One common reason why predictions look that "linear" is that you may not have made the series as stationary as it needs to be. I would check for seasonality etc. In other words, just because an ADF and KPSS test says the data is stationary, it is worthwhile to inspect the data visually or think intuitively about the trend to make further adjustments before you predict it. This may avoid that upward "linear" trend look.
Plotting predicted values in ARIMA time series in R One popular reason to why predictions look that "linear" is because you may not have made it as stationary as it needs to be. I would check for seasonality etc. In other words, just because a ADF and
26,892
How to predict state probabilities or states for new data with DepmixS4 package, for Hidden Markov Models
Have you solved this? If not, perhaps you could try: sum(forwardbackward(setpars(depmix(list(var~1), data=newData, nstates=3,family=list(gaussian())), getpars(originalModel)))[["alpha"]][nrow(newData),]) This one-liner gets the probability of new data by running the forward algorithm on your original model. Please let me know if you arrived at a better solution, as I am tackling this problem myself.
How to predict state probabilities or states for new data with DepmixS4 package, for Hidden Markov M
Have you solved this? If not, perhaps you could try: sum(forwardbackward(setpars(depmix(list(var~1), data=newData, nstates=3,family=list(gaussian())), getpars(originalModel)))[["alpha"]][nrow(data),])
How to predict state probabilities or states for new data with DepmixS4 package, for Hidden Markov Models Have you solved this? If not, perhaps you could try: sum(forwardbackward(setpars(depmix(list(var~1), data=newData, nstates=3,family=list(gaussian())), getpars(originalModel)))[["alpha"]][nrow(newData),]) This one-liner gets the probability of new data by running the forward algorithm on your original model. Please let me know if you arrived at a better solution, as I am tackling this problem myself.
How to predict state probabilities or states for new data with DepmixS4 package, for Hidden Markov M Have you solved this? If not, perhaps you could try: sum(forwardbackward(setpars(depmix(list(var~1), data=newData, nstates=3,family=list(gaussian())), getpars(originalModel)))[["alpha"]][nrow(data),])
26,893
How to predict state probabilities or states for new data with DepmixS4 package, for Hidden Markov Models
Fit the new model and then call posterior(). modNew <- depmix(EventTime~1,data=data2,transition=~Count,nstates=2, family=multinomial("identity")) modNew <- setpars(modNew,getpars(fm)) modNew <- fit(modNew) predStates <- posterior(modNew) predStates$state
How to predict state probabilities or states for new data with DepmixS4 package, for Hidden Markov M
Fit the new model and then call posterior(). modNew <- depmix(EventTime~1,data=data2,transition=~Count,nstates=2, family=multinomial("identity")) modNew <- setpars(modNew,getpars(fm)) modNew <- fit(
How to predict state probabilities or states for new data with DepmixS4 package, for Hidden Markov Models Fit the new model and then call posterior(). modNew <- depmix(EventTime~1,data=data2,transition=~Count,nstates=2, family=multinomial("identity")) modNew <- setpars(modNew,getpars(fm)) modNew <- fit(modNew) predStates <- posterior(modNew) predStates$state
How to predict state probabilities or states for new data with DepmixS4 package, for Hidden Markov M Fit the new model and then call posterior(). modNew <- depmix(EventTime~1,data=data2,transition=~Count,nstates=2, family=multinomial("identity")) modNew <- setpars(modNew,getpars(fm)) modNew <- fit(
26,894
Unbiased estimator for AR($p$) model
This is of course not a rigorous answer to your question 1, but since you asked the question in general, evidence for a counterexample already indicates that the answer is no. So here is a little simulation study using exact ML estimation from arima0 to argue that there is at least one case where there is bias: reps <- 10000 n <- 30 true.ar1.coef <- 0.9 ar1.coefs <- rep(NA, reps) for (i in 1:reps){ y <- arima.sim(list(ar=true.ar1.coef), n) ar1.coefs[i] <- arima0(y, order=c(1,0,0), include.mean = F)$coef } mean(ar1.coefs) - true.ar1.coef
Unbiased estimator for AR($p$) model
This is of course not a rigorous answer to your question 1, but since you asked the question in general, evidence for a counterexample already indicates that the answer is no. So here is a little sim
Unbiased estimator for AR($p$) model This is of course not a rigorous answer to your question 1, but since you asked the question in general, evidence for a counterexample already indicates that the answer is no. So here is a little simulation study using exact ML estimation from arima0 to argue that there is at least one case where there is bias: reps <- 10000 n <- 30 true.ar1.coef <- 0.9 ar1.coefs <- rep(NA, reps) for (i in 1:reps){ y <- arima.sim(list(ar=true.ar1.coef), n) ar1.coefs[i] <- arima0(y, order=c(1,0,0), include.mean = F)$coef } mean(ar1.coefs) - true.ar1.coef
Unbiased estimator for AR($p$) model This is of course not a rigorous answer to your question 1, but since you asked the question in general, evidence for a counterexample already indicates that the answer is no. So here is a little sim
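A Python analogue of the R simulation above makes the same point without the arima0 machinery: the conditional least-squares estimate of an AR(1) coefficient is biased downward in short series. The sample size, true coefficient, and replication count mirror the R snippet:

```python
import numpy as np

# Monte Carlo: OLS estimate of phi in y_t = phi*y_{t-1} + e_t, n = 30,
# true phi = 0.9. The mean estimate comes out noticeably below 0.9.
rng = np.random.default_rng(0)
reps, n, phi = 5000, 30, 0.9
est = np.empty(reps)
for r in range(reps):
    y = np.empty(n)
    y[0] = rng.normal() / np.sqrt(1 - phi ** 2)   # draw from stationary dist.
    for t in range(1, n):
        y[t] = phi * y[t - 1] + rng.normal()
    est[r] = np.sum(y[1:] * y[:-1]) / np.sum(y[:-1] ** 2)
print("mean estimate:", round(est.mean(), 3), " bias:", round(est.mean() - phi, 3))
```

The bias shrinks as n grows (the estimator is consistent), which is why the question of exact unbiasedness only bites in small samples.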
26,895
Unbiased estimator for AR($p$) model
I happen to be reading the same book that you are reading and found the answer to both of your questions. The bias of the autoregression betas is mentioned in the book on page 215. The book also mentions a way to correct the bias on page 223. The way to proceed is through an iterative two-step approach. Hope this helps.
Unbiased estimator for AR($p$) model
I happen to be reading the same book that you are reading and found the answer to both of your questions. The biasness of the autoregression betas is mentioned in the book on page 215. The book also
Unbiased estimator for AR($p$) model I happen to be reading the same book that you are reading and found the answer to both of your questions. The bias of the autoregression betas is mentioned in the book on page 215. The book also mentions a way to correct the bias on page 223. The way to proceed is through an iterative two-step approach. Hope this helps.
Unbiased estimator for AR($p$) model I happen to be reading the same book that you are reading and found the answer to both of your questions. The biasness of the autoregression betas is mentioned in the book on page 215. The book also
26,896
Linear regression with Laplace errors
The residuals (actually called errors) are assumed to be randomly distributed with a double-exponential distribution (Laplace distribution). If you are fitting these x and y data points, do it numerically. You first calculate beta-hat_ML for these points as a whole using the formula you posted above. This will determine a line through the points. Then subtract the y value of the line at that x value from each point's y value. This is the residual for that point. The residuals of all points can be used to construct a histogram that will give you the distribution of the residuals. There is a good mathematical article on it by Yang (2014). --Lee
Linear regression with Laplace errors
The residuals (actually called errors) are assumed to be randomly distributed with a double-exponential distribution (Laplace distribution). If you are fitting this x and y data points, do it numeric
Linear regression with Laplace errors The residuals (actually called errors) are assumed to be randomly distributed with a double-exponential distribution (Laplace distribution). If you are fitting these x and y data points, do it numerically. You first calculate beta-hat_ML for these points as a whole using the formula you posted above. This will determine a line through the points. Then subtract the y value of the line at that x value from each point's y value. This is the residual for that point. The residuals of all points can be used to construct a histogram that will give you the distribution of the residuals. There is a good mathematical article on it by Yang (2014). --Lee
Linear regression with Laplace errors The residuals (actually called errors) are assumed to be randomly distributed with a double-exponential distribution (Laplace distribution). If you are fitting this x and y data points, do it numeric
26,897
Linear regression with Laplace errors
I think this is equivalent to Robust Regression. In Robust Regression you minimize the 1-norm, instead of the 2-norm - and try to find ${\arg\min }_{\boldsymbol \beta \in \mathbb R^m} \sum _{i=1}^n |\mathbf x_i \cdot \boldsymbol \beta - y_i|$ as you wrote. One way to solve it is to approximate the 1-norm with a smooth surrogate, in the spirit of the Huber loss: i.e. $h_\eta(x)=\sqrt {x^2+\eta^2}$ for some small smoothing parameter $\eta$. So now the loss is $\sum _{i=1}^n \sqrt{(\mathbf x_i \cdot \boldsymbol \beta - y_i)^2+\eta^2}$ and you can use something like Gradient-Descent on this (now differentiable) function. Here's some code I wrote for an HW exercise in MATLAB (X, z, eta, epsilon and the step size t are assumed to be defined beforehand): fun_g = @(u) sum( sqrt (u.^ 2 + eta^ 2 )); fun_f = @(w) fun_g(X*w-z); grad_g = @(u) u.*( 1. /( sqrt (u.^ 2 + eta^ 2 ))); grad_f = @(w) X'*grad_g(X*w-z); iter = 0; w = zeros(size(X,2),1); grad = grad_f(w); while (norm(grad) > epsilon) iter = iter + 1; w = w - t*grad; fun_val = fun_f(w); grad = grad_f(w); fprintf( 'iter_number = %3d norm_grad = %2.6f fun_val = %2.6f\n' ,iter,norm(grad),fun_val); end
Linear regression with Laplace errors
I think this is equivalent to Robust Regression. In Robust Regression you minimize the 1-norm, instead of the 2-norm - and try to find ${\arg\min }_{\boldsymbol \beta \in \mathbb R^m} \sum _{i=1}^n |\
Linear regression with Laplace errors I think this is equivalent to Robust Regression. In Robust Regression you minimize the 1-norm, instead of the 2-norm - and try to find ${\arg\min }_{\boldsymbol \beta \in \mathbb R^m} \sum _{i=1}^n |\mathbf x_i \cdot \boldsymbol \beta - y_i|$ as you wrote. One way to solve it is to approximate the 1-norm with a smooth surrogate, in the spirit of the Huber loss: i.e. $h_\eta(x)=\sqrt {x^2+\eta^2}$ for some small smoothing parameter $\eta$. So now the loss is $\sum _{i=1}^n \sqrt{(\mathbf x_i \cdot \boldsymbol \beta - y_i)^2+\eta^2}$ and you can use something like Gradient-Descent on this (now differentiable) function. Here's some code I wrote for an HW exercise in MATLAB (X, z, eta, epsilon and the step size t are assumed to be defined beforehand): fun_g = @(u) sum( sqrt (u.^ 2 + eta^ 2 )); fun_f = @(w) fun_g(X*w-z); grad_g = @(u) u.*( 1. /( sqrt (u.^ 2 + eta^ 2 ))); grad_f = @(w) X'*grad_g(X*w-z); iter = 0; w = zeros(size(X,2),1); grad = grad_f(w); while (norm(grad) > epsilon) iter = iter + 1; w = w - t*grad; fun_val = fun_f(w); grad = grad_f(w); fprintf( 'iter_number = %3d norm_grad = %2.6f fun_val = %2.6f\n' ,iter,norm(grad),fun_val); end
Linear regression with Laplace errors I think this is equivalent to Robust Regression. In Robust Regression you minimize the 1-norm, instead of the 2-norm - and try to find ${\arg\min }_{\boldsymbol \beta \in \mathbb R^m} \sum _{i=1}^n |\
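The same smoothed-L1 gradient descent is easy to sketch in Python. The data, smoothing parameter eta, step size, and iteration count below are illustrative assumptions:

```python
import numpy as np

# Gradient descent on the smoothed L1 loss sum_i sqrt((x_i . w - y_i)^2 + eta^2),
# a differentiable surrogate for least-absolute-deviations regression.
rng = np.random.default_rng(0)
n, m = 200, 3
X = rng.normal(size=(n, m))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + rng.laplace(scale=0.1, size=n)   # Laplace errors

eta, step = 1e-2, 1e-3
w = np.zeros(m)
for _ in range(20000):
    r = X @ w - y
    w -= step * (X.T @ (r / np.sqrt(r ** 2 + eta ** 2)))
print(np.round(w, 2))   # should land close to w_true
```

As eta shrinks toward 0 the surrogate approaches the exact L1 objective, at the cost of a stiffer optimization problem near zero residuals.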
26,898
Difference between training and test data distribution
Ordinarily, you would obtain your training data as a simple random sample of your total dataset. This allows you to take advantage of all the known properties of random samples, including the fact that the training and test data then have the same underlying distributions. Indeed, the main purpose of this split is to use one set of data to "train" your model (i.e., fit the model) and the other set of data to test hypotheses of interest in that model. If you do not randomly sample your training data then you get all sorts of problems arising from the fact that there may be systematic differences between the two parts of your data.
Difference between training and test data distribution
Ordinarily, you would obtain your training data as a simple random sample of your total dataset. This allows you to take advantage of all the known properties of random samples, including the fact th
Difference between training and test data distribution Ordinarily, you would obtain your training data as a simple random sample of your total dataset. This allows you to take advantage of all the known properties of random samples, including the fact that the training and test data then have the same underlying distributions. Indeed, the main purpose of this split is to use one set of data to "train" your model (i.e., fit the model) and the other set of data to test hypotheses of interest in that model. If you do not randomly sample your training data then you get all sorts of problems arising from the fact that there may be systematic differences between the two parts of your data.
Difference between training and test data distribution Ordinarily, you would obtain your training data as a simple random sample of your total dataset. This allows you to take advantage of all the known properties of random samples, including the fact th
26,899
Difference between training and test data distribution
I think you're confusing the underlying distribution from which both training and test distributions are drawn, with the distributions of the specific train and test draws. Unless the underlying distribution is eg time-sensitive, changed during the time between eg drawing the training and the testing samples, the underlying distribution is identical each time. The goal in learning a machine learning model is typically not to learn the training distribution, but to learn the latent underlying distribution, of which the training distribution is only a sample. Of course, you cannot actually see the underlying distribution, but eg, if you only really cared about learning the training samples, you could simply memorize the training samples in a lookup table, end of story. In reality, you are using the training sample as a proxy into the underlying distribution. "Generalization" is somewhat synonymous with "try to learn the underlying distribution, rather than just overfitting to the training samples". To estimate how well the training data, and your fitted model, match the underlying distribution, one approach is to draw one training set, one test set. Train on the training set, test on the test set. In reality, since you're most likely fitting a bunch of hyperparameters, you'll overfit these against the test set, think you're getting some super awesome mega accuracy, then fail horribly when you put the model into production. A better approach is to use cross-fold validation: draw a bunch of training data; split it randomly into 80% training data, 20% validation/dev data; run training/test on this, note down the accuracy etc; redo the split, eg using a different random seed; re-run train/evaluate; redo eg 5, 10, 20 times, depending on how much variance you are seeing; this will give you a fairly realistic insight into how well your training sets and model are fitting the underlying distribution; it's pretty general. You can use this approach for any i.i.d. datasets
How to construct "reference priors"?
The reference prior improves upon the Jeffreys prior technique for finding a multiparameter prior by decomposing the problem into a series of conditional, lower-dimensional problems for which reasonable noninformative priors can be computed. The goal is to obtain a noninformative prior. It requires a further proof to show that the reference prior for a specific setting results in a proper posterior distribution (https://www.jstor.org/stable/3085905).

I will give you an example in terms of a linear regression model with a patterned variance-covariance matrix and normal errors. Let $\boldsymbol{\theta} = (\boldsymbol{\beta}, \boldsymbol{\phi})$, where $\boldsymbol{\beta}$ represents the parameters in the mean function and $\boldsymbol{\phi}$ represents the parameters in the variance function. Given $\boldsymbol{\phi}$, a noninformative prior for $\boldsymbol{\beta}$ would be proportional to 1 (i.e. uniform over the real line). Thus we can decompose the prior and construct the reference prior, $\pi^R\left(\boldsymbol{\theta}\right)=\pi^R\left(\boldsymbol{\beta}|\boldsymbol{\phi}\right)\pi^R\left(\boldsymbol{\phi}\right),$ where $\pi^R\left(\boldsymbol{\beta}|\boldsymbol{\phi}\right)=1$. Next, $\pi^R\left(\boldsymbol{\phi}\right)$ is computed as the Jeffreys-rule prior, but for the marginal model defined via the integrated likelihood \begin{eqnarray*} L^1 \left(\boldsymbol{\phi}\right) = \int_{\mathbb{R}^p} L(\boldsymbol{\theta})\, \pi^R\left(\boldsymbol{\beta}|\boldsymbol{\phi}\right) d \boldsymbol{\beta}. \end{eqnarray*} Note that a closed-form solution for $L^1 \left(\boldsymbol{\phi}\right)$ exists for this model, so obtaining the reference prior is not difficult. The difficulty lies in proving that the posterior density will always be proper, so that you have an "automatic" noninformative prior at your disposal to perform Bayesian analysis.
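To make the closed-form remark concrete: with the flat conditional prior, the integral over $\boldsymbol{\beta}$ is a standard Gaussian integral. Writing $L(\boldsymbol{\theta})$ as the $N\left(X\boldsymbol{\beta}, \Sigma(\boldsymbol{\phi})\right)$ likelihood for data $\mathbf{y}$ with $n \times p$ design matrix $X$ (notation assumed here for illustration, not fixed by the answer above), one obtains, up to factors not depending on $\boldsymbol{\phi}$,
\begin{eqnarray*}
L^1\left(\boldsymbol{\phi}\right) &=& \int_{\mathbb{R}^p} \left|2\pi\Sigma(\boldsymbol{\phi})\right|^{-1/2} \exp\left\{-\tfrac{1}{2}\left(\mathbf{y}-X\boldsymbol{\beta}\right)^{\top}\Sigma(\boldsymbol{\phi})^{-1}\left(\mathbf{y}-X\boldsymbol{\beta}\right)\right\} d\boldsymbol{\beta} \\
&\propto& \left|\Sigma(\boldsymbol{\phi})\right|^{-1/2}\left|X^{\top}\Sigma(\boldsymbol{\phi})^{-1}X\right|^{-1/2} \exp\left\{-\tfrac{1}{2}\left(\mathbf{y}-X\hat{\boldsymbol{\beta}}_{\boldsymbol{\phi}}\right)^{\top}\Sigma(\boldsymbol{\phi})^{-1}\left(\mathbf{y}-X\hat{\boldsymbol{\beta}}_{\boldsymbol{\phi}}\right)\right\},
\end{eqnarray*}
where $\hat{\boldsymbol{\beta}}_{\boldsymbol{\phi}} = \left(X^{\top}\Sigma(\boldsymbol{\phi})^{-1}X\right)^{-1}X^{\top}\Sigma(\boldsymbol{\phi})^{-1}\mathbf{y}$ is the generalized least squares estimate. The Jeffreys-rule computation for $\boldsymbol{\phi}$ then proceeds on this marginal likelihood directly.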