47,801

Does Fisher's Exact test for a $2\times 2$ table use the Non-central Hypergeometric or the Hypergeometric distribution?
As @Glen_b says, under the null hypothesis of an odds ratio of one, Fisher's non-central hypergeometric distribution reduces to a hypergeometric distribution. However, the fisher.test function, as well as carrying out Fisher's Exact Test, (1) also calculates the conditional maximum-likelihood estimate of, & confidence intervals for, the odds ratio, & (2) has an argument to set the odds ratio under the null to values other than 1, which explains the need to bring up non-centrality. It's worth noting, despite the manual's saying "given all marginal totals fixed", that it's Fisher's non-central hypergeometric distribution that's used in these calculations, which can arise from conditioning on marginal totals under several sampling schemes, but which is inapplicable when they are in fact fixed by design. See Fog (2015), "Biased Urn Theory".
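The reduction at an odds ratio of one can be checked numerically with scipy's implementations of the two distributions (the table margins below are arbitrary illustration values):

```python
import numpy as np
from scipy.stats import hypergeom, nchypergeom_fisher

# Arbitrary 2x2 margins: population size M, successes n, sample size N
M, n, N = 20, 7, 12
k = np.arange(max(0, N - (M - n)), min(n, N) + 1)  # support of the table cell

central = hypergeom.pmf(k, M, n, N)
noncentral = nchypergeom_fisher.pmf(k, M, n, N, odds=1.0)

# At an odds ratio of 1 the two pmfs coincide
print(np.allclose(central, noncentral))  # True
```

At any other value of `odds` the two pmfs differ, which is exactly why fisher.test needs the non-central family for its confidence intervals and non-unit nulls.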
47,802

coxme proportional hazard assumption
Apologies for making this a separate answer, but I cannot comment because I have less than 50 reputation.
Oka suggested using frailty in connection with coxph in order to test the proportional-hazards assumption. I believe it is worth noting that the documentation for frailty states that "the coxme package has superseded this [frailty] method." For this reason, the original question, about "how to test the PH assumption on mixed effect cox model coxme", has strong justification to stay within the scope of coxme.
47,803

coxme proportional hazard assumption
"However, I cannot find the equivalent for coxme models."
Based on the documentation, you can add random effects to a Cox or survreg model with the frailty function. As suggested in an SO answer, you can do it like this:
# fit the model with a frailty (random-effect) term
myfit <- coxph(Surv(Time, Censor) ~ fixed + frailty(random), data = mydata)
# assess the proportionality of hazards
cox.zph(myfit)
"How to test PH assumption on mixed effect cox model coxme?"
With coxme you could probably use model residuals for diagnostics. The objects produced by the lmekin or lmer functions are reported to have methods for residuals; once you have them, you can plot and examine them graphically or otherwise.
"If there is no equivalent of cox.zph for coxme models, is it valid for publication in a scientific article to run a mixed effect coxme model but test the PH assumption on a coxph model identical to the coxme model but without the random effect?"
There is no reason for that, since you can include the random effects via frailty in coxph, test the PH assumption with cox.zph, and get more accurate results.
47,804

coxme proportional hazard assumption
The proportional hazards (PH) assumption for fixed effects in a coxme model can be tested with the same cox.zph() function used for coxph models. As the documentation of cox.zph() states, its fit argument is "the result of fitting a Cox regression model, using the coxph or coxme functions." (Emphasis added.)
As another answer notes, the random effects in a coxme model cannot be evaluated for PH, as the modeling process treats the random effects as fixed offsets. I suspect that failures of PH with respect to random effects (whatever that might mean in this context) would thus show up as PH violations in fixed effects.
This answer illustrates the successful use of cox.zph() on a coxme object, at least with fairly recent versions of the software (survival_3.1-11 and coxme_2.2-16).
47,805

coxme proportional hazard assumption
Based on the documentation (page 31), cox.zph does not work with the frailty function.
Therefore, you cannot use cox.zph(myfit) to check mixed-effects Cox models as the answers by Oka or kjg above suggest.
Random-effects terms, such as frailty terms or the random effects in a coxme model, are not checked for proportional hazards; rather, they are treated as a fixed offset in the model.
47,806

Expectation of a matrix for variance-covariance
The expectation of a matrix of variables is not the expectation of the columns of the matrix. What may confuse you is that you treat each column as a variable and estimate its expectation by averaging its column. In this sense you are right. However, the covariance matrix is about the covariances between these variables. So, for example, cell (1,3) is the covariance between the first and the third variables, and so on. Since you have 3 variables there are 9 covariances (including the covariance of each variable with itself, which is its variance); that is how you get a $3\times 3$ matrix.
So the covariance matrix is:
$$\begin{bmatrix}Cov(X_{1},X_{1})& Cov(X_{1},X_{2}) &Cov(X_{1},X_{3})\\Cov(X_{2},X_{1})& Cov(X_{2},X_{2}) &Cov(X_{2},X_{3})\\Cov(X_{3},X_{1})& Cov(X_{3},X_{2}) &Cov(X_{3},X_{3})\end{bmatrix}.$$
Finally, one more thing. The expectation of a random matrix (i.e., a matrix whose elements are random variables) is the matrix of expectations of its cells. That is, if
$$X^{*}=\begin{bmatrix}X& Y\\Z& D\\\end{bmatrix},$$
where $X$, $Y$, $Z$ and $D$ are random variables, then
$$E(X^{*})=\begin{bmatrix}E(X)& E(Y)\\E(Z)& E(D)\\\end{bmatrix}.$$
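As a quick numerical illustration (in Python, with arbitrary simulated data), the sample covariance matrix built from the centred columns is indeed $3\times 3$ and matches the library routine:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))        # 1000 observations of 3 variables
Xc = X - X.mean(axis=0)               # centre each column

C = Xc.T @ Xc / (len(X) - 1)          # 3x3 sample covariance matrix
print(C.shape)                        # (3, 3)
print(np.allclose(C, np.cov(X, rowvar=False)))  # True
```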
47,807

Applying an ARIMA model with exogenous variables for forecasting
Once the model has been trained (in this instance ModeloX3) you can produce forecasts with the forecast function. I think you are missing some understanding as to how ARMAX models work. The model simply adds the xreg value as a covariate to the RHS of the equation, see here. This means the value needs to be explicitly provided for every time period you are trying to forecast.
Your programming issue here can be solved by using the following template:
ModeloX3 <- arima(carg2, order = c(1, 0, 1), xreg = chuv,
                  seasonal = list(order = c(0, 0, 0), period = NA))
# Next 5 values of the chuv series. Here they come from forecasting chuv
# itself, but how you obtain them is up to you; this is just a quick example.
chuvNext5 <- forecast(chuv, h = 5)$mean
fcast <- forecast(ModeloX3, h = 5, xreg = chuvNext5)
47,808

Kalman filter has a frequentist or bayesian origin?
The Kalman filter is the analytical implementation of the Bayesian filtering recursions for linear Gaussian state-space models. For this model class the filtering density can be tracked in terms of finite-dimensional sufficient statistics which do not grow in time$^*$. So I would say that it is pretty Bayesian and, as you stated, it is generally considered in a Bayesian context.
However, the origins of Kalman filtering can be traced back to Gauss. He invented recursive least squares for the prediction of orbits (Gauss, C. F., Theory of the Combination of Observations Least Subject to Errors, translated by G. W. Stewart, Philadelphia: SIAM Publishers, 1995), which I assume can be considered frequentist or classical in some sense.
$^*$(By the way, other exact finite-dimensional nonlinear filters exist, such as the Beneš and Daum filters, but there is no Fisher–Koopman–Darmois–Pitman theorem for filtering.) For general models your best bet is sequential Monte Carlo.
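The Bayesian character is easy to see in code: each step is a Gaussian predict followed by a Bayes update of the filtering density. A minimal scalar sketch in Python (the model parameters are illustrative):

```python
import numpy as np

def kalman_1d(ys, a=1.0, c=1.0, q=0.1, r=0.5, m0=0.0, p0=1.0):
    """Scalar linear-Gaussian Kalman filter: each step propagates the
    Gaussian filtering density N(m, p) by Bayes' rule."""
    m, p = m0, p0
    means = []
    for y in ys:
        # predict: push the current posterior through the state dynamics
        m_pred, p_pred = a * m, a * a * p + q
        # update: condition on y using the Gaussian likelihood
        k = p_pred * c / (c * c * p_pred + r)   # Kalman gain
        m = m_pred + k * (y - c * m_pred)
        p = (1.0 - k * c) * p_pred
        means.append(m)
    return np.array(means)

# Observing a constant signal, the filtered mean converges to it
print(kalman_1d(np.ones(50))[-1])   # close to 1.0
```

The finite-dimensional sufficient statistics mentioned above are exactly the pair (m, p) carried through the loop.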
47,809

Variance estimation for regression coefficients with complex survey data
A sort of 'proof by contradiction' is readily available on consideration of the scaling laws at work here, in light of information concepts. The usual estimator(s) you cite, since they ignore the correlation structure within the survey instrument, yield variance estimates that scale inversely with the number of survey questions. Thus, doubling the length of the survey (i.e., the number of items) would be thought to halve the variance of $\hat{\beta}$. But it is easy to appreciate that eventually you must run out of interesting (i.e., informative) new questions to ask a respondent, and thus that estimators ignorant of this fact will systematically overestimate the precision of $\hat{\beta}$.
To be clear, I'm advancing this argument on informational grounds, and in particular am making no appeal to such sheer practicalities as respondent fatigue, which are of course irrelevant to the theoretical content of your question. Any survey designer appreciates intuitively that eventually one exhausts the potential for novelty. Whereas one can 'interrogate' a coin repeatedly, with each flip yielding the same amount of information (about the coin's fairness) as every other flip in a sequence, this is not true for people. At some point, you would be able to predict quite accurately an individual's response to question $n+1$ from his/her responses to questions $1,\dots,n$. Thus, the rate at which new information arrives as you administer a survey to an individual person is monotone decreasing and has an asymptote at zero in the limit of an infinitely long questionnaire. Consequently, it is inconceivable that the precision of $\hat{\beta}$ from a survey should scale in the same way (i.e., linearly) as that of $\hat{p}_{heads}$ in a coin-flipping experiment, where the rate of information arrival is constant.
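The scaling argument can be made concrete with the standard formula for the variance of the mean of $n$ equicorrelated items; the correlation and item variance below are illustrative values, not from the question:

```python
import numpy as np

rho, sigma2 = 0.5, 1.0   # illustrative within-respondent correlation, item variance

for n in (5, 50, 500):
    var_mean = sigma2 * (1 + (n - 1) * rho) / n   # true variance of the item mean
    naive = sigma2 / n                            # independence-assuming estimate
    print(n, var_mean, naive)

# var_mean approaches rho * sigma2 = 0.5: adding questions stops helping,
# while the naive estimate keeps shrinking toward zero.
```

This is the "running out of informative questions" effect in miniature: the true precision plateaus while the independence-assuming estimator claims ever more.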
47,810

Variance estimation for regression coefficients with complex survey data
Here are some explicit ways that the model-based estimator can be biased.
Heteroskedasticity. Let X be binary and Y be continuous. We know that linear regression of Y on X reproduces Student's t-test (the equal-variance t-test), and we know that if the variance of Y differs between the X groups, the t-test has the wrong level. If the smaller group has larger variance, the t-test is anticonservative; if the smaller group has smaller variance, the t-test is conservative. That means the standard error is wrong: too small or too large depending on the group sizes.
Some moderately tedious linear algebra shows that the Satterthwaite/Welch t-statistic is what you get by using the sandwich variance estimator in the regression of Y on X. We know the Welch t-test (in not-too-small samples) has correct size even when the variance of Y differs by X, so it must be using the correct standard error.
Pseudoreplication. Suppose you have N observations of X and Y, and you take M identical copies of each one. The model-based variance is too small exactly by a factor of M. The sandwich variance is still correct: there's a factor of M^2 in the middle term and factors of 1/M in each outer term.
Precision vs sampling weights. You can think of these as related to the difference between replication and pseudoreplication. The classical derivation of WLS is that an observation (X,Y) with a weight of W arises when you have W independent observations with the same value of X, and you take Y to be the average. It's real replication W times. A sampling weight of W is equivalent to pseudoreplication: you have one observation, but you replicate it W times to correspond to the W individuals in the population it represents. The correlations between residuals for pseudoreplicates are all 1; the correlations between residuals for true replicates are basically 0. The model-based variance estimator treats them the same, but the sandwich estimator doesn't.
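The heteroskedasticity point can be checked directly: for a binary regressor, the HC2 sandwich standard error of the slope reproduces the Welch standard error exactly. A numpy sketch with simulated groups (sample sizes and variances are made up):

```python
import numpy as np

rng = np.random.default_rng(1)
n0, n1 = 40, 15
y0 = rng.normal(0, 1, n0)
y1 = rng.normal(1, 3, n1)            # smaller group with larger variance

# Welch standard error of the difference in means
se_welch = np.sqrt(y0.var(ddof=1) / n0 + y1.var(ddof=1) / n1)

# HC2 sandwich standard error for the slope of Y ~ 1 + X (X binary)
y = np.concatenate([y0, y1])
X = np.column_stack([np.ones(n0 + n1), np.r_[np.zeros(n0), np.ones(n1)]])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
u = y - X @ beta                                   # residuals
H = X @ np.linalg.inv(X.T @ X) @ X.T               # hat matrix
meat = X.T @ np.diag(u**2 / (1 - np.diag(H))) @ X  # HC2 leverage correction
bread = np.linalg.inv(X.T @ X)
se_hc2 = np.sqrt((bread @ meat @ bread)[1, 1])

print(np.isclose(se_welch, se_hc2))  # True
```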
47,811

Markov Random Fields vs Hidden Markov Model
They are similar in the sense that they are both graphical models, i.e., both of them describe a factorization of a joint distribution according to some graph structure. However, Markov Random Fields are undirected graphical models (i.e., they describe a factorization of a Gibbs distribution in terms of the clique potentials of some underlying graph). Hidden Markov Models, on the other hand, are a subclass of directed graphical models (i.e., they describe a factorization in terms of a product of conditional probability distributions) with a specific structure that describes some dynamic process with long-term dependencies. Both types of models can be converted into so-called factor graphs, so that the same algorithms can be used to perform inference tasks in them (e.g., compute marginal distributions or a MAP estimate).
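As a small illustration of the shared inference machinery, the forward recursion below computes the likelihood of an observation sequence by exploiting the HMM's chain factorization; the same sum-product pattern is what factor-graph algorithms generalize. The two-state parameters are made up for the example:

```python
import numpy as np

# Tiny 2-state HMM (illustrative parameters)
A = np.array([[0.9, 0.1],     # transition probabilities P(z' | z)
              [0.2, 0.8]])
B = np.array([[0.7, 0.3],     # emission probabilities P(x | z)
              [0.1, 0.9]])
pi = np.array([0.5, 0.5])     # initial state distribution

obs = [0, 1, 1, 0]
alpha = pi * B[:, obs[0]]
for o in obs[1:]:
    # forward recursion: marginalize the previous state, then emit
    alpha = (alpha @ A) * B[:, o]
likelihood = alpha.sum()
print(likelihood)
```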
47,812

Markov Random Fields vs Hidden Markov Model
Hidden Markov Models can be represented as directed graphs (with Bayesian Networks, letter a) of image below) or as undirected graphs (with Markov Random Fields, letter b) of image below, link here).
So yes, you can use Markov Random Fields to represent an HMM.
47,813

Intuition as to why estimates of a covariance matrix are numerically unstable
The reason that the SVD of the original matrix $X$ is preferred instead of the eigen-decomposition of the covariance matrix $C$ when doing PCA is that that the solution of the eigenvalue problem presented in the covariance matrix $C$ (where $C = \frac{1}{N-1}X_0^T X_0$, $X_0$ being the zero-centred version of the original matrix $X$) has a higher condition number than the corresponding problem presented by the original data matrix $X$. In short, the condition number of a matrix quantifies the sensitivity of the solution of a system of linear equations defined by that matrix to errors in the original data. The condition number strongly suggests (but does not fully determine) the quality of the system of linear equations' solution.
In particular as the covariance matrix $C$ is calculated by the cross-product of $X_0$ with itself, the ratio of the largest singular value of $X_0$ to the smallest singular value of $X_0$ is squared. That ratio is the condition number; values that are close to unity or generally below a few hundreds suggest a rather stable system. This is easy to see as follows:
Assume that $X_0 = USV^T$ where $U$ are the right singular vectors, $V$ are the left singular vectors and $S$ is the diagonal matrix holding the singular values of $X_0$, as $C = \frac{1}{N-1}X_0^TX_0$ then we can write: $C = \frac{1}{N-1} VS^TU^T USV^T = \frac{1}{N-1} V S^T S V^T = \frac{1}{N-1} V \Sigma V^T$. (Remember that the matrix $U$ is orthonormal so $U^TU = I$). ie. the singular values of $X_0^TX_0$ represented in $\Sigma$ are the square of the singular values of $X_0$ represented in $S$.
As you see while seemingly innocuous the cross-product $X_0^TX_0$ squares the condition number of the system you try to solve and thus makes the resulting system of equations (more) prone to numerical instability issues.
Some additional clarification particular to the paper linked: the estimate of the covariance matrix $C$ is immediately rank-degenerate in cases where $N < p$, which are the main focus of that paper; that's why the authors initially draw attention to the Marcenko–Pastur law (about the distribution of singular values) and to regularisation and banding techniques. Without such notions, working with $C$ or the inverse of $C$ (in the form of the Cholesky factor of the inverse of $C$, as the authors do) is numerically unstable. The rationale as to why these covariance matrices are degenerate is exactly the same as above in the case of very large matrices: the condition number is squared. This is even more prominent in the $N < p$ case: an $N\times p$ matrix $X$ has at most $N$ non-zero singular values, and its cross-product with itself can also have at most $N$ non-zero singular values, leading to rank-degeneracy (and therefore an "infinite" condition number). The paper presents a way to band the estimated covariance matrix, given some particular conditions (the estimated $C$ has a Toeplitz structure, the oracle $k$ representing the banding parameter can be properly estimated, etc.), such that it is numerically stable.
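A toy numeric illustration (my own example, not from the paper): for a diagonal data matrix the singular values can be read off directly, and forming the cross-product squares the condition number:

```python
# Toy example (assumed, not from the paper): for X = diag(s1, s2) the
# singular values are s1, s2, and X^T X = diag(s1^2, s2^2), so the
# condition number of the cross-product is the square of that of X.
s1, s2 = 1000.0, 1.0
cond_X = max(s1, s2) / min(s1, s2)                 # condition number of X
cond_XtX = max(s1**2, s2**2) / min(s1**2, s2**2)   # condition number of X^T X
print(cond_X, cond_XtX)   # 1000.0 1000000.0
assert cond_XtX == cond_X ** 2
```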
|
47,814
|
(deep learning) Is there a type of layer that can reverse the max-pooling operation?
|
As Zeiler says in his paper "Visualizing and Understanding Convolutional Networks" :
"In the convnet, the max pooling operation is non-invertible, however we can obtain an approximate inverse by recording the locations of the maxima within each pooling region in a set of switch variables."
Check the "Unpooling" section of Zeiler's paper for the details.
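A minimal 1-D sketch of that idea (my own illustration, not Zeiler's code): record the argmax locations ("switches") during pooling, and on unpooling place each pooled value back at its recorded location, with zeros elsewhere:

```python
def max_pool_with_switches(x, k):
    """Non-overlapping 1-D max pooling that also records argmax locations."""
    pooled, switches = [], []
    for start in range(0, len(x), k):
        window = x[start:start + k]
        j = max(range(len(window)), key=window.__getitem__)
        pooled.append(window[j])
        switches.append(start + j)   # remember where the max came from
    return pooled, switches

def unpool(pooled, switches, n):
    """Approximate inverse: everything except the maxima is lost (set to 0)."""
    out = [0.0] * n
    for v, idx in zip(pooled, switches):
        out[idx] = v
    return out

x = [1.0, 3.0, 2.0, 0.5]
pooled, sw = max_pool_with_switches(x, 2)
print(pooled, sw, unpool(pooled, sw, len(x)))
# [3.0, 2.0] [1, 2] [0.0, 3.0, 2.0, 0.0]
```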
|
47,815
|
(deep learning) Is there a type of layer that can reverse the max-pooling operation?
|
Have you checked this paper Stacked Convolutional Auto-Encoders for Hierarchical Feature Extraction?
Here we introduce a max-pooling layer that introduces sparsity over the hidden
representation by erasing all non-maximal values in non overlapping subregions.
Basically it's the same as alviur's answer. Since they used only one max pooling layer, instead of actually doing the down-sampling for each box, they just erased all the non-maximal values, and the sparse representation is used for reconstruction.
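A small sketch of the "erase all non-maximal values" variant (my illustration, not the paper's code): the output keeps the input's shape, so it can feed reconstruction directly, with no separate unpooling step:

```python
def erase_non_max(x, k):
    """Keep only the maximum in each non-overlapping window; zero the rest."""
    out = [0.0] * len(x)
    for start in range(0, len(x), k):
        window = x[start:start + k]
        j = max(range(len(window)), key=window.__getitem__)
        out[start + j] = window[j]   # sparse representation, same shape as x
    return out

print(erase_non_max([1.0, 3.0, 2.0, 0.5], 2))  # [0.0, 3.0, 2.0, 0.0]
```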
|
47,816
|
(deep learning) Is there a type of layer that can reverse the max-pooling operation?
|
MaxPool is not generally invertible, but PyTorch for example provides a function which computes a pseudo-inverse, where all elements other than the max are set to 0:
MaxUnpool2d takes in as input the output of MaxPool2d including the indices of the maximal values and computes a partial inverse in which all non-maximal values are set to zero.
|
47,817
|
Modeling multivariate Time Series Count Data in R
|
I found a reference which answered my question: https://arxiv.org/pdf/1405.3738.pdf.
The model is quite complicated; here is the state-space representation:
So, let's say I have $L$ different products that I'm studying across time periods $1,\dots,T$.
$Y_{l,t} \sim z\,\delta_0 + (1-z)\,\text{NB}(\exp(\widetilde{\eta}_{l,t}),\alpha_l)$ is the distribution for product $l$ at time $t$ (a zero-inflated negative binomial).
$\widetilde{\eta}_{l,t} = \eta_{l,t} + X_{l,t}\theta_l$ is the log of the mean of product $l$'s sales at time $t$; modelling on the log scale guarantees that the mean is positive.
$\eta_{l,t} = \mu_l + \phi_l(\eta_{l,t-1}-\mu_l) + \epsilon_{l,t}$
$\epsilon_{l,t} \sim N(0,\frac{1}{\tau_l})$
The other priors and hyperpriors are given in the linked paper.
P.S. Now I'm trying to write the JAGS code and any help would be much appreciated! ( https://stackoverflow.com/questions/40528715/runtime-error-in-jags )
Edit:
Here is the JAGS code:
model{
  # hyperpriors
  alpha_star ~ dunif(0.001,0.1)
  tau_mu_star ~ dunif(1,10)
  mu_star ~ dnorm(0,0.5)
  beta_tau ~ dunif(2,25)
  beta_0_tau ~ dunif(1,10)
  beta_theta ~ dunif(2,25)
  phiminus ~ dunif(1,50)
  k_tau ~ dunif(5,10)
  k_0_tau ~ dunif(1,5)
  pointmass_0 ~ dnorm(0,10000)
  k_theta ~ dunif(5,10)
  phiplus ~ dunif(1,600)
  theta_star ~ dmnorm(b0,B0)
  # priors per product
  for (l in 1:L){
    z[l] ~ dbeta(0.5,0.5)
    phi[l] ~ dbeta(phiplus + phiminus, phiminus)
    tau[l] ~ dgamma(k_tau,beta_tau)
    tau_theta[l] ~ dgamma(k_tau,beta_tau)
    mu[l] ~ dnorm(mu_star, tau_mu_star)
    alpha[l] ~ dexp(alpha_star)
    eps[1,l] ~ dnorm(0,tau[l])
    eta[1,l] <- mu_star + eps[1,l]    # deterministic nodes use <- in JAGS
    theta[l,1:8] ~ dmnorm(theta_star,thetavar*tau_theta[l])
    #y[1,l] ~ inprod(1-z[l],dnegbin(exp(eta[1,l]),alpha[l]))
    y[1,l] ~ dnegbin(exp(eta[1,l]),alpha[l])
    ystar[1,l] ~ dnorm(z[l]*pointmass_0 + inprod((1-z[l]),y[1,l]),100000)
  }
  for (i in 2:N){
    for (l in 1:L){
      eps[i,l] ~ dnorm(0,tau[l])
    }
    for (l in 1:L){
      eta[i,l] <- mu[l] + phi[l]*(eta[i-1,l]-mu[l]) + eps[i,l]
      eta_star[i,l] <- eta[i,l] + inprod(c(x[i,l],xshared[i,]),t(theta[l,]))
      # observations
      #kobe[i,l] ~ dnegbin(dexp(eta_star[i,l]),alpha[l])
      ##y[i,l] <- inprod(1-z[l],kobe[i,l])
      #y[i,l] ~ inprod(1-z[l],dnegbin(exp(eta_star[i,l]),alpha[l]))
      y[i,l] ~ dnegbin(exp(eta_star[i,l]),alpha[l])
      ystar[i,l] ~ dnorm(z[l]*pointmass_0 + inprod((1-z[l]),y[i,l]),100000)
    }
  }
}
Which I call from R using runjags:
parsamples <- run.jags('jags_model.txt', data=forJags, monitor=c('y','theta'), sample=100, method='rjparallel')
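As a sanity check on the generative model, a forward simulation of one product's series can be sketched in Python (my own sketch, not the paper's or the JAGS code; all parameter values below are made up), drawing the negative binomial counts via the gamma–Poisson mixture:

```python
import math
import random

random.seed(1)

# Made-up parameter values for illustration only
mu, phi, tau, alpha, z = 1.0, 0.8, 4.0, 5.0, 0.1

def nb_draw(mean, size):
    """Negative binomial via the gamma-Poisson mixture."""
    lam = random.gammavariate(size, mean / size)  # E[lam] = mean
    # Poisson draw (Knuth's algorithm; fine for small lam)
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

eta = mu
ys = []
for t in range(10):
    # latent AR(1) log-mean
    eta = mu + phi * (eta - mu) + random.gauss(0, 1 / math.sqrt(tau))
    # zero-inflation: with probability z the observation is a structural zero
    y = 0 if random.random() < z else nb_draw(math.exp(eta), alpha)
    ys.append(y)
print(ys)
```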
|
47,818
|
Comparing Perplexities With Different Data Set Sizes
|
Would comparing perplexities be invalidated by the different data set sizes?
No. I copy below some text on perplexity I wrote with some students for a natural language processing course (assume $\log$ is base 2):
In order to assess the quality of a language model, one needs to define evaluation metrics. One evaluation metric is the log-likelihood of a text, which is computed as follows, assuming that the language model is a trigram model, and the text contains $N$ words:
\begin{align}
l_{\text{corpus}} = \log \left ( \prod_{i=3}^{N} p(w_i | w_{i-2},w_{i-1}) \right ) = \sum_{i=3}^N \log p(w_i | w_{i-2},w_{i-1})
\end{align}
In order to make this metric independent from the size of the corpus, one can compute the average log-likelihood of the corpus on a per-word basis, i.e. the log-likelihood of the corpus normalized by the number of words:
\begin{align}
l_{\text{word_average}} = \frac{1}{N} \sum_{i=3}^N \log p(w_i | w_{i-2},w_{i-1})
\end{align}
The most common evaluation metric for a language model is the perplexity, which can be computed directly from the average log-likelihood of the corpus on a per word basis:
\begin{align}
\text{Perplexity} \ = \ 2^{-l_{\text{word_average}}}
\end{align}
Note that in general, to make a meaningful comparison between two different language models, one needs to use the same vocabulary.
Using unigram, bigram and trigram models trained on 38 million words from the Wall Street Journal, and using a vocabulary of size 19,979, one obtains perplexities of 962, 170, and 109, respectively, when tested on 1.5 million words from the same journal.
What is the intuitive meaning of perplexity? This measure can be interpreted as the effective branching factor of the model. Let's explore this intuition using a simple uniform model for unigrams: $P(w) = \frac{1}{|V|}, \forall w \in V$.
This means that:
\begin{align}
P(w_1, ..., w_N) = \prod_{i=1}^{N} P(w_i) = \left( \frac{1}{|V|} \right)^N \notag \\
\text{perplexity} = 2^{- \frac{1}{N} \log P(w_1, ..., w_N)} = 2^{- \frac{1}{N} \log \left( \left( \frac{1}{|V|} \right)^N \right)} = |V|
\end{align}
Under this uniform language model, the perplexity is equal to the size of the vocabulary. Generally, perplexity captures the effective vocabulary size under the model. For instance, the trigram model described above has an effective branching factor of 109, even though it operates over a vocabulary of 19,979.
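The uniform-model identity above is easy to verify numerically (vocabulary and corpus sizes chosen arbitrarily):

```python
import math

# Sketch: perplexity of the uniform unigram model equals |V|.
V = 1000   # vocabulary size (arbitrary)
N = 50     # number of words in the "corpus" (arbitrary)
logprobs = [math.log2(1.0 / V) for _ in range(N)]
avg_ll = sum(logprobs) / N          # average per-word log-likelihood
perplexity = 2 ** (-avg_ll)
print(perplexity)                   # ~= 1000, i.e. |V|
```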
|
47,819
|
Defining a prior multinomial regression. Case study with `MCMCglmm`
|
Good question! I've actually tried to get my mind around the same for quite some time, so I'll just share my experiences. I'm still a novice in this area, so I hope I haven't made any mistakes in notation.
always, and I mean always, standardise your continuous variables to have mean=0 and sd=1 (or even sd=2). Look into some of Andrew Gelman's blog posts or articles about this. Just google "Andrew Gelman standardization"; there are a lot of good papers and posts.
Think of your coefficients as log(odds-ratios) (in reference to a reference category, explanation follows). For an in-depth discussion, see this answer. Andrew Gelman also has some recommendations on priors, such as the cauchy, or normal(0,1). His papers are about logistic regression, but I find that these recommendations also extend to multi-outcome regression.
The dimension of the prior indeed depends on the number of outcomes. If you have three outcomes, you have these three linear dependencies:
$ y_1 = \beta_{0,1} + \sum_i\beta_{i,1}*x_{i,1} $ and $ y_2 = \beta_{0,2} + \sum_i\beta_{i,2}*x_{i,2} $ and $ y_3 = \beta_{0,3} + \sum_i\beta_{i,3}*x_{i,3} $
The second subscript denotes the outcome. Normally you would put a prior on each of these coefficients. I'll illustrate with the intercept, $\beta_{0,k}$,
$\beta_{0,1} \sim \mathcal{N}(0,1)$ and $\beta_{0,2} \sim \mathcal{N}(0,1)$ and $\beta_{0,3} \sim \mathcal{N}(0,1)$
Just to complete the math, the probability of outcome $k$ is the softmax of the linear predictors,
$p_k = \frac{\exp(y_k)}{\sum_k \exp(y_k)}$, and the likelihood is $y \sim \text{Multinomial}(p)$.
Please note that you would often force one of the outcomes to be the "reference" outcome due to identifiability issues. Wiki has a detailed description of why. Practically, this means you force the coefficients of the reference category to be 0, so its unnormalised weight in the softmax is $\exp(0) = 1$.
I have never used MCMCglmm so I cannot answer anything specific to that, I'm afraid.
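A tiny numeric sketch of the math above (plain Python, nothing MCMCglmm-specific): linear predictors are mapped to probabilities by a softmax, with the reference outcome's predictor fixed at 0:

```python
import math

def softmax(etas):
    """Map linear predictors to probabilities that sum to 1."""
    ws = [math.exp(e) for e in etas]   # reference category contributes exp(0)=1
    s = sum(ws)
    return [w / s for w in ws]

# first outcome is the reference, its linear predictor is fixed at 0
p = softmax([0.0, 0.5, -1.0])
print(p)
```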
|
47,820
|
Understanding the spectral decomposition of a Markov matrix? [closed]
|
This is all from here: http://cims.nyu.edu/~holmes/teaching/asa15/Lecture2.pdf TLDR: you're still using the spectral decomposition theorem; you just have to find the right symmetric matrix.
Detailed Balance
Let $P$ be your (say 2x2) transition matrix. It isn't symmetric. Let $\pi$ be the (1x2) stationary distribution vector. If the detailed balance equations hold, your Markov chain is reversible and this means that
$$
\left( \begin{array}{cc}
\pi_1 & 0 \\
0 & \pi_2 \end{array} \right) P = P' \left( \begin{array}{cc}
\pi_1 & 0 \\
0 & \pi_2 \end{array} \right).
$$
If we define
$$
\Lambda = \left( \begin{array}{cc}
\sqrt{\pi_1} & 0 \\
0 & \sqrt{\pi_2} \end{array} \right),
$$
then you can re-write the above equation as
$$
\Lambda^2P = P'\Lambda^2.
$$
Using the Spectral Decomposition Theorem
The link above claims $V = \Lambda P \Lambda^{-1}$ is symmetric. This can be verified from the previous formula by left- and right-multiplying both sides by $\Lambda^{-1}$.
By the spectral decomposition theorem, $V$ is orthogonally diagonalizable. The link calls its eigenvectors $w_j$, and its eigenvalues $\lambda_j$ (for $j=1,2$ in this case). So $Vw_j = \lambda_j w_j$ and $w_j' V = w_j' \lambda_j$. Plugging in the definition of $V$, we get
$$
\Lambda P \Lambda^{-1} w_j = \lambda_j w_j
$$
and
$$
w_j' \Lambda P \Lambda^{-1} = w_j' \lambda_j.
$$
Premultiplying the former by $\Lambda^{-1}$ and the latter by $\Lambda$ we can verify the claim that $P$ has left eigenvectors $\psi_j = \Lambda w_j$ and right eigenvectors $\phi_j = \Lambda^{-1} w_j$. These could be written in terms of matrices as
$$
\left( \begin{array}{c}
\psi_1' \\
\psi_2' \end{array} \right)
P
=
\left( \begin{array}{cc}
\lambda_1 & 0 \\
0 & \lambda_2 \end{array} \right)
\left( \begin{array}{c}
\psi_1' \\
\psi_2' \end{array} \right)
$$
$$
P
\left(
\begin{array}{cc}\phi_1 & \phi_2\end{array}
\right)
=
\left(
\begin{array}{cc}\phi_1 & \phi_2\end{array}
\right)
\left( \begin{array}{cc}
\lambda_1 & 0 \\
0 & \lambda_2 \end{array} \right).
$$
Using the fact that $w_i'w_j = 0$ for $i \neq j$ and $1$ otherwise we can show that
\begin{align*}
P &=
P
\left(
\begin{array}{cc}\phi_1 & \phi_2\end{array}
\right)
\left( \begin{array}{c}
\psi_1' \\
\psi_2' \end{array} \right) \\
&=
\left(
\begin{array}{cc}\phi_1 & \phi_2\end{array}
\right)
\left( \begin{array}{cc}
\lambda_1 & 0 \\
0 & \lambda_2 \end{array} \right)
\left( \begin{array}{c}
\psi_1' \\
\psi_2' \end{array} \right) \\
&=
\left(
\begin{array}{cc}\phi_1 & \phi_2\end{array}
\right)
\left( \begin{array}{cc}
\lambda_1 & 0 \\
0 & \lambda_2 \end{array} \right)
\left( \begin{array}{c}
\phi_1' \Lambda^2 \\
\phi_2' \Lambda^2 \end{array} \right) \\
&= \sum_{k} \lambda_k \phi_k \phi_k' \Lambda^2.
\end{align*}
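These identities are easy to check numerically for a toy reversible 2-state chain (my own sketch; the transition matrix is arbitrary): detailed balance holds, and $V = \Lambda P \Lambda^{-1}$ comes out symmetric:

```python
import math

# Arbitrary 2-state transition matrix (rows sum to 1)
P = [[0.9, 0.1],
     [0.2, 0.8]]
# Stationary distribution of a 2-state chain: pi is proportional to (p21, p12)
p12, p21 = P[0][1], P[1][0]
pi = [p21 / (p12 + p21), p12 / (p12 + p21)]
# Detailed balance (automatic for 2-state chains): pi_1 p12 = pi_2 p21
assert abs(pi[0] * p12 - pi[1] * p21) < 1e-12
# Lambda = diag(sqrt(pi)); V[i][j] = sqrt(pi_i) * P[i][j] / sqrt(pi_j)
s = [math.sqrt(x) for x in pi]
V = [[s[i] * P[i][j] / s[j] for j in range(2)] for i in range(2)]
print(V[0][1], V[1][0])   # equal, i.e. V is symmetric
```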
|
47,821
|
Understanding the spectral decomposition of a Markov matrix? [closed]
|
The goal is finding the stationary distribution of the states. As the other answer mentioned, symmetric is not the key; "diagonalizable" is what matters. See this post.
One related post: Properties of spectral decomposition. I think you will be clear if you read the accepted answer in that post.
In the particular example in the question, the properties of a symmetric matrix have been confused with those of a positive definite one, which explains the discrepancies noted.
In addition, I personally feel this paper explains eigenvalues, eigenvectors and iterative methods in an intuitive way. Feel free to check:
https://www.cs.cmu.edu/~quake-papers/painless-conjugate-gradient.pdf
|
47,822
|
How can I show that the average empirical risk is equal to the true risk for a binary classifier?
|
Suppose the dataset is $\mathcal{D}_n = \{X_1, \dots, X_n\}$ where each data point $X_i$ is drawn i.i.d. from some distribution $f_X$. The true risk is:
$$R(h) = E_{X \sim f_X}[\mathcal{L}(X, h(X))]$$
Show that $E_{\mathcal{D}_n}[R_e(h)] = R(h)$
(1) Start with the LHS:
$$E_{\mathcal{D}_n}[R_e(h)]$$
(2) Plug in the expression for the empirical risk $R_e(h)$:
$$= E_{\mathcal{D}_n} \left [
\frac{1}{n} \sum_{i=1}^n \mathcal{L}(X_i, h(X_i))
\right ]$$
(3) By linearity of expectation:
$$= \frac{1}{n} \sum_{i=1}^n E_{\mathcal{D}_n}[\mathcal{L}(X_i, h(X_i))]$$
(4) Because $\mathcal{L}(X_i, h(X_i))$ only depends on $X_i$, the joint expectation (over datasets) is equal to the marginal expectation (over data point $X_i$):
$$= \frac{1}{n} \sum_{i=1}^n E_{X_i}[\mathcal{L}(X_i, h(X_i))]$$
(5) The expected value is the same for all $X_i$ because they're identically distributed. So, we can replace $X_i$ with a generic variable $X$ drawn from the same distribution $f_X$:
$$= \frac{1}{n} \sum_{i=1}^n E_{X \sim f_X}[\mathcal{L}(X, h(X))]$$
(6) Simplify:
$$= E_{X \sim f_X}[\mathcal{L}(X, h(X))]$$
This is equal to the true risk $R(h)$.
Alternative
Here's an equivalent way of proceeding, starting after step (3) above.
Explicitly write out the expected value over datasets. Because the data points are independent, the joint distribution of the dataset is equal to the product of the marginal distributions of the data points.
$$= \frac{1}{n} \sum_{i=1}^n \int \cdots \int
\left ( \prod_{j=1}^n f_X(x_j) \right )
\mathcal{L}(x_i, h(x_i))
\ dx_1 \cdots dx_n$$
Reorder the integrals (see Fubini's theorem) and pull terms involving $x_i$ to the outside:
$$= \frac{1}{n} \sum_{i=1}^n
\int f_X(x_i) \mathcal{L}(x_i, h(x_i)) \left [
\int \cdots \int
\left ( \prod_{j \ne i} f_X(x_j) \right )
\ dx_1 \cdots dx_{i-1} \ dx_{i+1} \cdots dx_n
\right ] dx_i$$
The expression inside the brackets is simply integrating a distribution, so it's equal to one:
$$= \frac{1}{n} \sum_{i=1}^n
\int f_X(x_i) \mathcal{L}(x_i, h(x_i)) dx_i$$
The integral is the expected value of $\mathcal{L}(\cdots)$ with respect to $f_X$:
$$= \frac{1}{n} \sum_{i=1}^n
E_{X \sim f_X}[\mathcal{L}(X, h(X))]$$
This is the same as the result of step (5) above, so proceed to (6).
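A quick Monte Carlo sketch also illustrates the result. The distribution, classifier, and loss below are made up: with $X \sim \mathrm{Uniform}(0,1)$, $h(x) = \mathbf{1}[x > 1/2]$, and squared loss, the true risk is exactly $\int_0^{1/2} x^2\,dx + \int_{1/2}^1 (1-x)^2\,dx = 1/12$, and the empirical risk averaged over many independent datasets should match it:

```python
import random

random.seed(0)

# Toy setup (illustrative): X ~ Uniform(0,1), fixed classifier, squared loss.
def h(x):
    return 1.0 if x > 0.5 else 0.0

def loss(x):
    return (h(x) - x) ** 2

true_risk = 1 / 12  # closed form for this toy setup

# Average the empirical risk R_e(h) over many independently drawn datasets.
n, n_datasets = 50, 20000
avg = 0.0
for _ in range(n_datasets):
    xs = [random.random() for _ in range(n)]
    avg += sum(loss(x) for x in xs) / n
avg /= n_datasets
```

The average lands very close to $1/12 \approx 0.0833$, consistent with unbiasedness.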
|
47,823
|
How can I show that the average empirical risk is equal to the true risk for a binary classifier?
|
It's actually an immediate consequence of the fact that $R_e(h)$ is a Monte Carlo estimator for $R(h)$ (for fixed $h$). This is evident if, instead of the terrible notation often used in some introductory Machine Learning books, where "datasets" are considered, we more properly consider a random vector $\mathbf{X}$ whose $n$ components are iid. The random vector has a probability distribution
$$p(\mathbf{X})=p(X_1,\dots,X_n)$$
Now, obviously $R_e(h(X_1),\dots,h(X_n))=f(\mathbf{X})$ is a random variable and we really want to compute its expectation:
$$\mathbb{E}_{\mathbf{X}\sim p(\mathbf{X})}[R_e(h)]$$
But this is immediate if we just notice that
$$f(\mathbf{X})=\frac{1}{n} \sum_{i=1}^n \mathcal{L}(X_i, h(X_i))=\frac{1}{n} \sum_{i=1}^n g(X_i)=\frac{1}{n} \sum_{i=1}^n Y_i$$
is nothing more than the Monte Carlo estimator for the mean of $Y=g(X)$, a random variable whose mean is nothing more than the true risk. Proof: all $Y_i$ are iid and we have
$$\mathbb{E}[Y]=\mathbb{E}_{X\sim p(X)}[g(X)]=\mathbb{E}_{X\sim p(X)}[\mathcal{L}(X, h(X))]=R(h)$$
Now, the Monte Carlo estimator has many interesting properties, but we only need two (actually one, but thanks to the second I'll also show you an interesting property of the Empirical Risk which you didn't ask about):
it is an unbiased estimator of true risk, i.e., its mean is equal to the mean of $Y$. As a matter of fact,
$$\mathbb{E}_{\mathbf{X}\sim p(\mathbf{X})}[R_e(h(X_1),\dots,h(X_n))]=\mathbb{E}[Y]=R(h)$$
it is a consistent estimator of true risk, i.e., the Monte Carlo estimator converges a.s. to the mean of $Y$ as the sample size $n\to\infty$. In other words
$$R_e(h)\overset{a.s.}\to R(h) \ \text{as} \ n\to\infty$$
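Consistency is easy to see numerically in a toy setup (all choices below are made up for illustration): with $X \sim \mathrm{Uniform}(0,1)$, $h(x) = \mathbf{1}[x > 1/2]$ and squared loss, $R(h) = 1/12$ exactly, and the error of $R_e(h)$ shrinks as the sample grows:

```python
import random

random.seed(1)

# Illustrative setup with a known true risk of 1/12.
def h(x):
    return 1.0 if x > 0.5 else 0.0

def loss(x):
    return (h(x) - x) ** 2

true_risk = 1 / 12

def emp_risk(n):
    """One realisation of the empirical risk on a fresh sample of size n."""
    xs = [random.random() for _ in range(n)]
    return sum(loss(x) for x in xs) / n

errors = {n: abs(emp_risk(n) - true_risk) for n in (100, 100_000)}
```

The error at $n = 100{,}000$ is orders of magnitude below the typical error at $n = 100$, as the law of large numbers predicts.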
|
47,824
|
How can I fit the parameters of a lognormal distribution knowing the sample mean and one certain quantile?
|
Let $\mu$ and $\sigma$ be parameters of the corresponding Normal distribution (its mean and standard deviation, respectively). Given the lognormal mean $m$ and the value $z$ for percentile $\alpha$, we need to find $\mu$ and $\sigma \gt 0$.
To this end, let $\Phi$ be the standard Normal distribution function. The two pieces of information are
$m = \exp(\mu + \sigma^2/2)$, whence $\mu + \sigma^2/2 = \log(m)$.
$\log(z) = \mu + \sigma \Phi^{-1}(\alpha).$
Subtracting the second from the first and multiplying by $2$ produces
$$\sigma^2 - 2\Phi^{-1}(\alpha)\sigma + 2(\log(z) - \log(m)) = 0.$$
This is a quadratic equation in $\sigma$, solved with the usual Quadratic Formula. There will be zero, one, or two solutions. Two solutions are likely to occur when $\alpha$ is close to $1$.
$\mu$ is then found in terms of $\sigma$ by using either of the original equations; for instance,
$$\mu = \log(m) - \sigma^2/2$$
will do nicely.
(A special case is when $\alpha=1/2$, corresponding to the median, where $\Phi^{-1}(\alpha) = 0$. The formula for $\sigma$ simplifies to $$\sigma^2 + 2(\log(z) - \log(m)) = 0.$$ That is the solution obtained by @Glen_b at Can I get the parameters of a lognormal distribution from the sample mean & median?, which uses "$\tilde{m}$" for "$z$".)
For fitting these estimates to data, consider measuring the goodness of fit to discriminate between the two solutions when both are available. A $\chi^2$ statistic should do fine. This approach is illustrated in the following R code, which simulates data, performs the analysis, draws a histogram of the data, and overplots the solutions; when a solution fits poorly, its curve is faded out.
#
# Given a mean `m` and `alpha` quantile `z`, find the matching parameters of any
# lognormal distribution.
#
f <- function(m, z, alpha) {
B <- -2 * qnorm(alpha)
C <- 2*(log(z) - log(m))
sigma <- (-B + c(-1,1)*sqrt(B^2 - 4*C)) / 2
sigma <- sigma[sigma > 0 & !is.na(sigma)]
mu <- log(m) - sigma^2 / 2
return(cbind(mu=mu, sigma=sigma))
}
#
# Compute a chi-squared statistic for data `x` corresponding to binning
# a lognormal distribution with parameter `theta` into `n` equal-size bins.
#
chi.squared <- function(theta, x, n=4) {
cutpoints <- exp(qnorm(seq(0, 1, length.out=n+1), theta[1], theta[2]))
counts <- table(cut(x, cutpoints))
expected <- length(x) / n
  stat <- sum((counts - expected)^2 / expected)
  return(stat)
}
#
# Simulate data, compute their statistics, and estimate matching lognormal
# distributions.
#
set.seed(17)
x <- exp(rnorm(20, sd=0.4))
m <- mean(x)
alpha <- 0.9
z <- quantile(x, alpha)
theta <- f(m, z, alpha)
stats <- apply(theta, 1, chi.squared, x=x)
#
# Plot the data and any matching lognormal density functions.
#
hist(x, freq=FALSE, breaks=12)
invisible(apply(theta, 1, function(q) {
stat <- chi.squared(q, x, min(length(x), 5))
curve(dnorm(log(x), q["mu"], q["sigma"])/x, add=TRUE, lwd=2,
col=hsv(0, min(1, 2/sqrt(1 + 10*stat/length(x))), 0.9))
}))
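The closed-form solution can also be cross-checked in pure Python (the target mean, quantile, and level below are made-up values; `statistics.NormalDist`, available since Python 3.8, supplies $\Phi^{-1}$):

```python
import math
from statistics import NormalDist

# Illustrative targets: lognormal mean m and the value z of the alpha quantile.
m, z, alpha = 1.2, 2.0, 0.9
q = NormalDist().inv_cdf(alpha)              # Phi^{-1}(alpha)

# Quadratic sigma^2 + B*sigma + C = 0 from the derivation above.
B = -2 * q
C = 2 * (math.log(z) - math.log(m))
disc = B * B - 4 * C
sigmas = [s for s in ((-B + r * math.sqrt(disc)) / 2 for r in (-1, 1)) if s > 0]

for sigma in sigmas:
    mu = math.log(m) - sigma ** 2 / 2
    # Each (mu, sigma) pair must reproduce the target mean and quantile.
    assert abs(math.exp(mu + sigma ** 2 / 2) - m) < 1e-9
    assert abs(math.exp(mu + sigma * q) - z) < 1e-9
```

For these particular values the discriminant is positive and both roots are positive, so two $(\mu, \sigma)$ pairs match, exactly the two-solution case discussed above.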
|
47,825
|
OHE vs Feature Hashing
|
One hot encoding and feature hashing are both forms of feature engineering where a data scientist is trying to represent categorical information (blood type, country, product ID, word) as an input vector.
We might represent Afghanistan as [1,0,0,0], Belarus as [0,1,0,0], Canada as [0,0,1,0], and Denmark as [0,0,0,1]. We could make this vector large enough to hold a position for each country in the world. But what about words, where there are thousands, and some words that appear in your test set may not appear in your training set? A hash function maps data of arbitrary size to data of fixed size. You can use hash(string) mod n to return a number between 0 and n - 1, and then this is the index that you increment in the input vector.
An example from https://www.quora.com/Can-you-explain-feature-hashing-in-an-easily-understandable-way:
to represent "the quick brown fox":
h(the) mod 5 = 0
h(quick) mod 5 = 1
h(brown) mod 5 = 1
h(fox) mod 5 = 3
Once we have this we can simply construct our vector as:
(1,2,0,1,0)
Finally, to address your last question:
Also, do we represent hashed features in Sparse format ?
With a sufficiently large vector, feature hashing will produce sparse vectors (where most of its values are zero).
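Here is a minimal sketch of the hashing trick. Python's built-in `hash()` is salted per process, so a stable digest (md5) is used instead; the resulting indices therefore differ from the hand-worked example above:

```python
import hashlib

def hashed_features(tokens, n):
    """Map each token to hash(token) mod n and count occurrences per index."""
    vec = [0] * n
    for tok in tokens:
        idx = int(hashlib.md5(tok.encode()).hexdigest(), 16) % n
        vec[idx] += 1
    return vec

v = hashed_features("the quick brown fox".split(), 5)
```

The vector has fixed length 5 regardless of vocabulary size, and its entries sum to the number of tokens; unseen test-set words simply hash into the same fixed range.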
|
47,826
|
Selecting between ARMA, GARCH and ARMA-GARCH models
|
the p-value is greater than 0.05 and as such we CAN say that the residuals are a realisation of discrete white noise.
Strictly speaking, no. Failure to reject a null hypothesis (here: absence of autocorrelation) does not imply we can accept it. Also, absence of autocorrelation does not imply white noise (although it holds the other way around).
Would a GARCH model even add anything then?
Yes, why not? ARMA and GARCH have different targets so they are compatible (one may use none, either or both). Take a look at What is the difference between GARCH and ARMA?.
So if I use a model with ARMA+GARCH it will explain more variance (and therefore predict better) than the two models individually?
First, there is a question how well you are able to estimate the models. Models estimated on a finite sample may or may not be close to the "true" models (where by "true" I mean the best possible approximation within the ARMA-GARCH class of models of the real data generating process (DGP)).
Second, ARMA alone would explain more variance in sample than ARMA-GARCH (just as OLS would explain more than feasible GLS, regardless of which is closer to the true model in population). GARCH would not explain any variance if you leave the conditional mean part empty (without ARMA). And if the ARMA-GARCH model approximates the true DGP better than a plain ARMA and plain GARCH, the out of sample performance of ARMA-GARCH will be better -- as long as you can estimate the model sufficiently well. (And since ARMA-GARCH is a richer model than plain ARMA and plain GARCH, you would normally not be able to estimate it as precisely as plain ARMA and plain GARCH on any given dataset.)
So the answer is not clear cut, unfortunately. But if you discover conditional heteroskedasticity in the residuals of an ARMA model, it certainly makes sense to try appending a GARCH specification to ARMA and seeing what happens.
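To illustrate what conditional heteroskedasticity looks like, here is a sketch that simulates an ARCH(1) process (the parameters are illustrative) and compares the lag-1 autocorrelation of the residuals with that of their squares; ARMA targets the former, GARCH-type models the latter:

```python
import math
import random

random.seed(0)

# Simulate ARCH(1): sigma_t^2 = omega + alpha1 * e_{t-1}^2, e_t = sigma_t * z_t.
n, omega, alpha1 = 5000, 0.1, 0.5
e, prev = [], 0.0
for _ in range(n):
    sigma = math.sqrt(omega + alpha1 * prev ** 2)
    prev = sigma * random.gauss(0, 1)
    e.append(prev)

def acf1(x):
    """Sample autocorrelation at lag 1."""
    m = sum(x) / len(x)
    num = sum((x[t] - m) * (x[t - 1] - m) for t in range(1, len(x)))
    den = sum((v - m) ** 2 for v in x)
    return num / den

r_raw = acf1(e)                   # near zero: no linear predictability
r_sq = acf1([v * v for v in e])   # clearly positive: volatility clustering
```

The residuals themselves look like white noise to an autocorrelation test, while their squares are strongly autocorrelated, which is precisely the pattern that motivates appending a GARCH specification.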
|
47,827
|
Weights to combine different models
|
Stacking (Wolpert 1992) is a method for combining multiple base models using a high level model. The output of each base model is provided as an input to the high level model, which is then trained to maximize performance. Using the same data to train the base models and high level model would result in overfitting, so cross validation is used instead. Each base model is trained on the training set. The validation set is then fed through each base model to obtain inputs for training the high level model. This technique is sometimes called blending, when using a simple, held out validation set rather than cross validation. Stacking can be used for different types of problems (e.g. classification, regression, unsupervised learning). It works well in practice, and has become a popular tool in machine learning competitions.
In your case, the base models would be logistic regression, a random forest, and xgboost. Each of these models gives predicted class probabilities, which would be used as inputs to the high level model. In general, it's not necessary for base models to output class probabilities, but we can use them when available. A simple high level model might be a weighted average of the predicted class probabilities from each base model. In this case, you'd find the weights that minimize the log loss on the validation set (subject to the constraints that the weights are nonnegative and sum to one). An alternative high level model might be logistic or multinomial logistic regression (which will work even if the base models output scores rather than probabilities, like support vector machines). Fancier high level models are possible too (random forests, boosted classifiers, etc.).
In general, the best high level model will depend on the problem. It has been found that particular constraints on the high level model are helpful in certain settings. Be mindful that it's possible for the high level model to overfit the validation set (e.g. see here).
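The simplest high level model, a weighted average of base-model probabilities with weights chosen to minimize validation log loss, can be sketched as follows (the labels and predicted probabilities are made up, and a coarse grid search over the simplex stands in for a proper constrained optimizer):

```python
import math

# Hypothetical validation labels and P(y=1) from three base models.
y = [1, 0, 1, 1, 0, 0, 1, 0]
preds = [
    [0.9, 0.2, 0.6, 0.8, 0.3, 0.4, 0.7, 0.1],  # logistic regression
    [0.8, 0.3, 0.7, 0.6, 0.2, 0.5, 0.9, 0.2],  # random forest
    [0.7, 0.1, 0.8, 0.9, 0.4, 0.3, 0.6, 0.3],  # xgboost
]

def log_loss(p, labels):
    eps = 1e-12
    return -sum(l * math.log(max(pi, eps)) + (1 - l) * math.log(max(1 - pi, eps))
                for pi, l in zip(p, labels)) / len(labels)

# Grid search over nonnegative weights summing to one (step 0.05).
best = None
steps = [i / 20 for i in range(21)]
for w1 in steps:
    for w2 in steps:
        w3 = 1 - w1 - w2
        if w3 < -1e-9:
            continue
        blend = [w1 * a + w2 * b + w3 * c for a, b, c in zip(*preds)]
        ll = log_loss(blend, y)
        if best is None or ll < best[0]:
            best = (ll, (w1, w2, max(w3, 0.0)))
```

Because the grid includes the corners (1, 0, 0), (0, 1, 0), and (0, 0, 1), the blended log loss can never be worse than the best individual base model on the validation set.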
References:
Wolpert (1992). Stacked generalization.
Breiman (1996). Stacked regressions.
Ting and Witten (1999). Issues in stacked generalization.
Kaggle Ensembling Guide (blog post, 2015)
|
47,828
|
Is there any method for choosing the number of layers and neurons?
|
There is no direct way to find the optimal number of them: people empirically try and see (e.g., using cross-validation). The most common search techniques are random, manual, and grid searches.
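A plain grid search over the architecture can be sketched in a few lines. Here `evaluate` is a hypothetical stand-in for "train the network and return its cross-validated error"; it is a made-up function whose minimum is placed at 3 layers and 64 units per layer:

```python
import math

def evaluate(n_layers, n_units):
    # Hypothetical proxy for cross-validated error (minimum at 3 layers, 64 units).
    return (n_layers - 3) ** 2 + (math.log2(n_units) - 6) ** 2

layers_grid = [1, 2, 3, 4, 5]
units_grid = [16, 32, 64, 128, 256]

# Exhaustively evaluate every (layers, units) combination and keep the best.
best = min(
    ((evaluate(l, u), l, u) for l in layers_grid for u in units_grid),
    key=lambda t: t[0],
)
# best == (0.0, 3, 64): the grid point with the lowest (mock) validation error
```

Random search works the same way but samples configurations instead of enumerating them, which scales better as the number of hyperparameters grows.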
There exist more advanced techniques such as
1) Gaussian processes. Example:
Franck Dernoncourt, Ji Young Lee Optimizing Neural Network Hyperparameters with Gaussian Processes for Dialog Act Classification, IEEE SLT 2016.
2) Neuro-evolution. Examples:
Zaremba, Wojciech, Ilya Sutskever, and Rafal Jozefowicz. "An empirical exploration of recurrent network architectures." (2015): used evolutionary computation to find optimal RNN structures.
Franck Dernoncourt. "The medial Reticular Formation: a neural substrate for action selection? An evaluation via evolutionary computation.". Master's Thesis. École Normale
Supérieure Ulm. 2011. Used evolutionary computation to find connections in the ANN.
Bayer, Justin, Daan Wierstra, Julian Togelius, and Jürgen Schmidhuber. "Evolving memory cell structures for sequence learning." In International Conference on Artificial Neural Networks, pp. 755-764. Springer Berlin Heidelberg, 2009.: used evolutionary computation to find optimal RNN structures.
|
47,829
|
Joint distribution of dependent Binomial random variables
|
There is no unique joint distribution. In fact, there are infinitely many possibilities for constructing the joint distribution. For instance, there exist infinitely many copula functions that can be used to construct a joint distribution with such marginals.
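A tiny numerical illustration with Bernoulli marginals (Binomial with $n=1$): two different joint pmfs share exactly the same marginals, so the marginals alone cannot pin down the joint.

```python
# Two joint pmfs for (X, Y): independence vs. perfect dependence.
# Rows index the value of X, columns the value of Y.
independent = [[0.25, 0.25],
               [0.25, 0.25]]
comonotone  = [[0.50, 0.00],
               [0.00, 0.50]]

def marginals(joint):
    """Row sums give the pmf of X, column sums the pmf of Y."""
    row = [sum(r) for r in joint]
    col = [sum(c) for c in zip(*joint)]
    return row, col
```

Both tables have Bernoulli(0.5) marginals for X and Y, yet they are different joint distributions; any convex combination of the two is yet another one.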
|
47,830
|
Is every L-estimator an M-estimator?
|
A classic example would be a trimmed mean.
For concreteness consider a 25% trimmed mean, where we average the middle half of the data.
That's an L-estimator, but not an M-estimator. It can in a sense be approximated* by a Huber-type M-estimator but they're not the same.
* (perhaps 'analogy' would be a better term than 'approximation' -- they might not always be very close together -- if the distribution is quite skew for example. In symmetric cases they're very alike.)
While many L-estimators can also be M-estimators, there are a great many that aren't.
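To make the 25% trimmed mean above concrete, here is a minimal Python sketch (illustrative only, not a library implementation):

```python
def trimmed_mean(xs, prop=0.25):
    """Average the middle of the data after dropping a proportion
    `prop` of the observations from each tail."""
    xs = sorted(xs)
    k = int(len(xs) * prop)
    middle = xs[k:len(xs) - k]
    return sum(middle) / len(middle)
```

Because the estimate is a fixed linear combination of order statistics, an extreme observation in either tail has no influence on the result, which is exactly what makes it an L-estimator.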
|
47,831
|
Using p-values reported as inequalities such as $p<0.05$ in meta-analysis; can I convert them to $0.05$?
|
If you convert $p<0.05$ to $p=0.05$ then your analysis will be conservative, but you will at least have been able to include all the studies; similarly for $p < 0.01$ and so on. The problem comes from the ones which say $p > 0.05$, as the only safe option there is to convert them to $p = 1$. Your suggestion of $p = 0.5$ cannot really be justified unless the authors have explicitly stated that the result was in the correct direction, in which case the maximum value of a one-tailed $p$ would be $0.5$. Be careful with the method you use, as some of them do not allow $p=0$ or $p=1$, so you would need to use a value very close to the boundary.
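For instance, if the combination method were Fisher's, the boundary issue can be handled by clamping the converted p-values slightly away from 0 and 1 (a sketch; the function name and the eps value are arbitrary choices):

```python
import math

def fisher_statistic(pvals, eps=1e-12):
    """Fisher's combined statistic, -2 * sum(log p); under the null it
    is chi-squared with 2k degrees of freedom. Clamping p-values away
    from 0 and 1 keeps the log defined for boundary conversions."""
    clamped = [min(max(p, eps), 1 - eps) for p in pvals]
    stat = -2.0 * sum(math.log(p) for p in clamped)
    return stat, 2 * len(clamped)
```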
|
47,832
|
Calculating weights for inverse probability weighting for the treatment effect on the untreated/non-treated
|
For ATU, the weights on $y_i$ would be
$$
w_i = \begin{cases}
\frac{1 - \hat p(x_i)}{\hat p(x_i)} & \text{if}\ d_i=1 \\
1 & \text{if}\ d_i=0,
\end{cases}
$$
where $d_i$ is the binary treatment indicator.
For ATT/ATET, the weights are
$$
w_i = \begin{cases}
1 & \text{if}\ d_i=1 \\
\frac{\hat p(x_i)}{1-\hat p(x_i)} & \text{if}\ d_i=0
\end{cases}
$$
For ATE, the weights are
$$
w_i = \begin{cases}
\frac{1}{\hat p(x_i)} & \text{if}\ d_i=1 \\
\frac{1}{1-\hat p(x_i)} & \text{if}\ d_i=0
\end{cases}
$$
You can find these formulas derived on pages 67-69 of Micro-Econometrics for Policy, Program and Treatment Effects by Myoung-jae Lee, except that I broke them into two pieces here.
Here's how I might do this in Stata, with native commands when possible and also by hand with a weighted regression of the outcome on a binary treatment dummy:
cls
set more off
webuse cattaneo2, clear
/* (0) Get the phats */
qui probit mbsmoke mmarried c.mage##c.mage fbaby medu
predict double phat, pr
/* (1a) ATE */
teffects ipw (bweight) (mbsmoke mmarried c.mage##c.mage fbaby medu, probit), ate
/* (1b) ATE By Hand */
gen double ate_w =cond(mbsmoke==1,1/phat,1/(1-phat))
reg bweight i.mbsmoke [pw=ate_w], vce(robust)
/* (2a) ATT */
teffects ipw (bweight) (mbsmoke mmarried c.mage##c.mage fbaby medu, probit), atet
/* (2b) ATT by Hand */
gen double att_w =cond(mbsmoke==1,1,phat/(1-phat))
reg bweight i.mbsmoke [pw=att_w], vce(robust)
/* (3) ATU by Hand Only */
gen double atu_w =cond(mbsmoke==1,(1-phat)/phat,1)
reg bweight i.mbsmoke [pw=atu_w], vce(robust)
This gives the following three effects of maternal smoking on newborn weight:
ATU = -231.8782 grams
ATT = -225.1773 grams
ATE = -230.6886 grams
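For illustration only, the three weighting formulas above can also be written out directly; this Python sketch (function name and interface are mine) takes treatment indicators `d` and estimated propensity scores `phat`:

```python
def ipw_weights(d, phat, estimand="ATU"):
    """Inverse-probability weights for a binary treatment, following
    the ATU/ATT/ATE formulas above."""
    weights = []
    for di, p in zip(d, phat):
        if estimand == "ATU":
            w = (1 - p) / p if di == 1 else 1.0
        elif estimand == "ATT":
            w = 1.0 if di == 1 else p / (1 - p)
        elif estimand == "ATE":
            w = 1.0 / p if di == 1 else 1.0 / (1 - p)
        else:
            raise ValueError(estimand)
        weights.append(w)
    return weights
```

The resulting weights would then go into a weighted regression of the outcome on the treatment dummy, just as in the Stata code above.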
|
47,833
|
What is the maximum entropy distribution given values for several quantiles of one sample?
|
Maximum entropy problems do not always admit a solution. The generic expression for the maximum entropy density $f(x)$ given a set of integral constraints
\begin{equation}
\int dx \, h_i(x) \, f(x) = c_i
\end{equation}
with $i=1\ldots N$ is
\begin{equation}
f(x) = e^{\mu + \sum_{i=1}^N \lambda_i h_i(x)} \;.
\end{equation}
The values of the parameters $\mu$ and $\lambda_i$ have to be found by imposing the constraints and the requirement that $f(x)$ integrates to one, i.e., that it is a proper density. I left the boundaries of the integral in the constraints unspecified on purpose. The reason will become clear in a moment.
The quantile constraints $F(x_i)=q_i$, where $F$ is the distribution function (the integral of $f(x)$), translate into having $h_i(x)=1-\theta_{x_i}(x)$ and $c_i=q_i$, where $\theta_z(x)$ is the Heaviside theta function, which equals $1$ if $x>z$ and zero otherwise. The problem is that now the expression for $f(x)$ given above is not an integrable function on the real line. Summarizing, your problem does not have a solution. If, however, you add further constraints, such as a finite support or some specified moment, then the problem might become solvable.
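To see the finite-support case concretely: since each $h_i$ is an indicator, $f(x)=e^{\mu + \sum_i \lambda_i h_i(x)}$ is piecewise constant, so on a support $[a,b]$ the maximum entropy density spreads each inter-quantile probability increment uniformly over its interval. A small Python sketch (function name and interface are mine):

```python
def maxent_piecewise_density(a, b, q_points, q_probs):
    """Maxent density on [a, b] under F(q_points[i]) = q_probs[i]:
    piecewise constant, with each probability increment spread
    uniformly over its interval."""
    edges = [a] + list(q_points) + [b]
    increments = ([q_probs[0]]
                  + [q2 - q1 for q1, q2 in zip(q_probs, q_probs[1:])]
                  + [1.0 - q_probs[-1]])
    densities = [m / (e2 - e1)
                 for m, e1, e2 in zip(increments, edges, edges[1:])]
    return edges, densities
```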
|
47,834
|
Does a continuous censored predictor have to be treated as ordinal?
|
I think most articles on "censored variables" will be related to the response variable which is quite a different story.
A censored regressor is not automatically a problem. If you do not fully trust this regressor, or if the corresponding "residuals versus variable" plot shows trouble at the two extreme values 21 and 60, then you can still decide to add dummy variables like
year60: 1 if 60 or above, 0 otherwise
year21: 1 if 21 or below, 0 otherwise
to the regression to allow the model to be flexible enough to represent the relationship.
Of course, because you don't have values outside the interval from 21 to 60, nothing can be done to recover the lost information. All you can do is try to choose a flexible enough regression equation.
Let me demonstrate the idea on a simple example with just this one covariable in R
# Step 1: Generate and visualize data
set.seed(29)
age <- 15:90
ageCensored <- pmin(60, pmax(21, age)) # censored at 21 and 60
outcome <- 20 + 0.5 * age + 0.03 * (age - 40)^2 + rnorm(length(age))*10
plot(outcome ~ ageCensored)
# Simple linear regression, ignoring potential misfit at the endpoints
fit <- lm(outcome ~ ageCensored)
summary(fit)
abline(fit, col = "red") # to add the regression line to the scatter plot above
# Output
Estimate Std. Error t value Pr(>|t|)
(Intercept) 17.30597 3.99649 4.330 4.61e-05 ***
ageCensored 0.60062 0.08176 7.346 2.21e-10 ***
[...]
Residual standard error: 10.39 on 74 degrees of freedom
Multiple R-squared: 0.4217, Adjusted R-squared: 0.4139
F-statistic: 53.97 on 1 and 74 DF, p-value: 2.213e-10
# Residual versus fitted plot shows considerable misfit which is also directly visible from the scatter plot with the regression line
plot(fit, which = 1)
# Now we can either improve the fit by using a squared age effect (knowing how the data was generated) or by using the dummy "trick" mentioned above. Let's try the dummy trick.
fit2 <- lm(outcome ~ ageCensored + I(ageCensored == 21) + I(ageCensored == 60))
summary(fit2)
plot(fit2, which = 1)
# Results
Estimate Std. Error t value Pr(>|t|)
(Intercept) 31.0754 10.4810 2.965 0.0041 **
ageCensored 0.3242 0.2498 1.298 0.1984
I(ageCensored == 21)TRUE 4.3685 8.4830 0.515 0.6082
I(ageCensored == 60)TRUE 43.7598 6.3583 6.882 1.82e-09 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 16.89 on 72 degrees of freedom
Multiple R-squared: 0.6965, Adjusted R-squared: 0.6838
F-statistic: 55.07 on 3 and 72 DF, p-value: < 2.2e-16
# Residuals versus fitted plot looks better now (although heterogeneity can be spotted at the right endpoint, a problem which I ignore here for simplicity)
# Plot of the regression function against age
plot(outcome ~ ageCensored, xlim = range(age), xlab = "age")
lines(age, predict(fit2, newdata = data.frame(ageCensored = ageCensored)), col = "red")
Note that since in your data you cannot distinguish a 60-year-old person from a person older than 60 (i.e. you don't know which values are really censored), you cannot do much more here. If you had this information, you could slightly redefine the dummy variables to
year>60: 1 if above 60, 0 otherwise
year<21: 1 if below 21, 0 otherwise
to treat persons aged exactly 60 or 21 separately from the censored ones.
|
47,835
|
Parameter estimation of a Rayleigh random variable with an offset
|
One important thing to note is that your data don't appear to be consistent with having been drawn from a Rayleigh population -- the right tail is considerably too heavy.
Nevertheless, I'll continue as if the shifted-Rayleigh were a suitable model.
If the offset is unknown you can estimate it as a parameter.
The density for a one-parameter Rayleigh is:
$\qquad f(x;\sigma )={\frac {x}{\sigma ^{2}}}e^{-x^{2}/(2\sigma ^{2})},\quad x\geq 0,$
If we introduce a shift $\mu$, it becomes:
$\qquad f(x;\mu,\sigma )={\frac {x-\mu}{\sigma ^{2}}}e^{-(x-\mu)^{2}/(2\sigma ^{2})},\quad x\geq \mu .$
[NB Here $\mu$ is the lower bound on the random variable, not the mean.]
Dey et al, 2014 [1] discuss estimation in the two-parameter Rayleigh case. (However, you should carefully note that in the parameterization there, the second parameter, $\lambda$, is not the scale - even though they say it is - in fact, $\sigma$ (or anything proportional to it) is a scale parameter, where $\lambda=(2\sigma^2)^{-1}$.)
They provide a simple iterative estimator for the MLE of the shift parameter, $\mu$: $\require{enclose}$
$${\mu}^{[j+1]}=\enclose{horizontalstrike}{2\sum_{i=1}^n(x_i-\mu^{[j]})^2\times \sum_{i=1}^n(x_i-\mu^{[j]})\times \sum_{i=1}^n(x_i-\mu^{[j]})^{-1}}$$
[Edit: It looks like this formula (found in both the working and published versions!) cannot be correct, since it's in squared units-of-$x$. Clearly a shift/location parameter has to be in units-of-$x$.
For the moment (until I see if I can derive it correctly myself), probably the best thing to do is optimize the profile log likelihood for $\mu$ in equation 7 using a univariate optimizer:
$\qquad g(\mu) = \sum_{i=1}^n\ln\left[\frac{(x_i-\mu)}{\sum_{i=1}^n(x_i-\mu)^2}\right]$
a quick check of the algebra seems to suggest this formula is correct up to an additive constant. Running a few dozen examples on randomly generated Rayleigh data - both in small (n=10) and moderately large samples (n=1000) - suggests that simply optimizing the profile log likelihood directly seems to work quite well. I used Brent's method but any number of reasonable optimization methods should work adequately.]
Then $\hat{\mu}$ is taken to be the value of ${\mu}$ at the last iterate obtained at convergence, ${\mu}^{[T]}$, say.
These iterations could be started (${\mu}^{[0]}$) at the method of moments estimator of $\mu, \tilde{\mu}=\bar{x}-k s$ where $\bar{x}$ and $s$ are the sample mean and standard deviation respectively,
and $k = \frac{\Gamma(\frac32)}{\sqrt{1-\Gamma(\frac32)^2}}=\sqrt{\frac{\pi}{4-\pi}}\approx 1.913$, or at some suitably small distance below the smallest observation (e.g. ${\mu}^{[0]}=x_{(1)}-\frac{c}{\sqrt{n}}$ with $c$ near $0.3$ should work reasonably well as a start point). Note that the method of moments estimator may sometimes exceed the smallest observation (and should be avoided/modified in that case).
If the data are then shifted by $\hat{\mu}$, $x^{(0)}_i=x_i-\hat{\mu}$, the scale parameter may be estimated from the back-shifted
data in the usual fashion for a Rayleigh distribution. Standard errors and confidence intervals follow in the same fashion.
Interestingly, (but not entirely surprisingly for a shift parameter on a random variable bounded below by it),
the shift parameter doesn't have the "usual" asymptotics for MLEs, in that the variance isn't proportional to $\frac{1}{n}$.
(The paper gives asymptotic confidence intervals for the parameters - but again, note that they don't use the same parameterization for the main parameter. The same paper discusses other estimators, but since the MLEs are fairly simple, I'd suggest sticking with them)
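Here is a rough Python sketch of that direct approach, maximizing $g(\mu)$ over a grid below the smallest observation instead of using Brent's method (the grid resolution and search range are arbitrary choices):

```python
import math

def profile_loglik(mu, xs):
    """g(mu) from the text: sum of log[(x_i - mu) / sum_j (x_j - mu)^2],
    defined only for mu strictly below the smallest observation."""
    shifted = [x - mu for x in xs]
    if min(shifted) <= 0:
        return float("-inf")
    s2 = sum(v * v for v in shifted)
    return sum(math.log(v / s2) for v in shifted)

def fit_shift(xs, n_grid=20_000):
    """Crude grid maximizer of the profile log-likelihood over a
    range extending one data-spread below min(xs)."""
    lo = min(xs) - (max(xs) - min(xs))
    hi = min(xs) - 1e-9
    grid = [lo + (hi - lo) * i / n_grid for i in range(n_grid + 1)]
    return max(grid, key=lambda m: profile_loglik(m, xs))
```

On simulated shifted-Rayleigh data the grid maximizer lands close to the true shift; a univariate optimizer such as Brent's method would be both faster and more precise.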
[1]: Dey, S., T. Dey, and D. Kundu (2014),
Two-Parameter Rayleigh Distribution: Different Methods of Estimation,
American Journal of Mathematical and Management Sciences, Vol 33, No 1, p55-74
(working paper here)
|
47,836
|
What is out-of-fold average?
|
It's hard to know for sure with such a terse and pithy description, but here's a shot at what he may likely be getting at.
Say you have a very high cardinality feature $x$ with some giant set of possible levels $l_1, l_2, \cdots, l_n$. These can be difficult to use in a model directly. One approach to deriving a feature from such a predictor is to take some other data set that is not used in training, and compute the average values of the response within each group determined by the predictor
$$ x'_i = \frac{1}{\# \{ j : x_j = x_i \} } \sum_{j:\, x_j = x_i} y_j $$
Then you can use this new predictor in a learner, but you have reduced the many binary features of the categorical into one new feature in the model.
There are caveats. First, you absolutely must use data that is held out from training to calculate the group-level averages, or you will have leaked the thing you are trying to predict directly into your predictors, and your model is worthless. Second, if you want your model to explain much of anything, this is a pretty bad approach, as your new predictor basically says "y happened to x because y happened to x in some other data set."
If you don't have a free data set hanging around to compute the group level averages, one pretty effective technique is to add noise to group level averages computed from your training data
$$ x'_i = \frac{ \sum_{j:\, x_j = x_i} y_j + \text{laplace}(0, \alpha)} {\# \{ j : x_j = x_i \} + \text{laplace}(0, \alpha)} $$
where $\text{laplace}(0, \alpha)$ is random noise generated from a Laplace distribution. This technique is derived from research into differential privacy.
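A third variant of the same idea, and the one the question's title points at, is to compute the averages out-of-fold: cycle through folds of the training data so that each row's encoding is computed only from the other folds. A toy Python sketch (function name and the index-based fold assignment are my own choices):

```python
from collections import defaultdict

def oof_target_encode(levels, y, n_folds=5):
    """Encode a categorical by the response mean computed on the
    *other* folds, so no row ever sees its own target value."""
    n = len(levels)
    enc = [0.0] * n
    global_mean = sum(y) / n
    for fold in range(n_folds):
        in_fold = [i for i in range(n) if i % n_folds == fold]
        out_fold = [i for i in range(n) if i % n_folds != fold]
        sums, counts = defaultdict(float), defaultdict(int)
        for i in out_fold:
            sums[levels[i]] += y[i]
            counts[levels[i]] += 1
        for i in in_fold:
            c = counts[levels[i]]
            enc[i] = sums[levels[i]] / c if c else global_mean
    return enc
```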
|
47,837
|
Missing data imputation in time series in R
|
First thing: a lot of imputation packages do not work with whole rows missing (because their algorithms rely on correlations between the variables - if there are no other values in a row, there is no way to estimate the missing ones).
You need imputation packages that work on time features.
You could use for example package imputeTS to impute the temperature.
library(imputeTS)
x <- ts(htemp$TEMPERATURE, frequency = 12)
x.withoutNA <- na_kalman(x)
This would be one possible solution of getting imputed temperature values.
Here another one with the forecast package:
library(forecast)
x <- ts(htemp$TEMPERATURE, frequency = 12)
x.withoutNA <- na.interp(x)
These packages actually work, because they work on time correlations of one attribute instead of inter-attribute correlations.
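If you prefer Python, the simplest analogue of na.interp's non-seasonal behaviour is linear interpolation over the time index; a rough sketch (not equivalent to the Kalman- or STL-based methods above):

```python
import numpy as np

def interpolate_series(values):
    # Fill NaNs by linear interpolation over the (implicit) time index.
    values = np.asarray(values, dtype=float)
    idx = np.arange(len(values))
    missing = np.isnan(values)
    filled = values.copy()
    filled[missing] = np.interp(idx[missing], idx[~missing], values[~missing])
    return filled
```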
|
47,838
|
Missing data imputation in time series in R
|
You can also use the package 'kssa'. It automatically helps you identify the best imputation method for your time series.
https://www.est.colpos.mx/web/packages/kssa/index.html
|
47,839
|
How to transform one PDF into another graphically?
|
You're heading in the right direction with your thoughts on considering the cdf.
Consider some random variable, $X$ with cdf $F_X(x)$ and density $f_X(x)$. To make things simple, consider applying some monotonic increasing transformation, $t$ on $X$, giving $Y=t(X)$. The new variable $Y$ has cdf $F_Y(y)$ and density $f_Y(y)$. Then:
$F_Y(y) = P(Y\leq y) = P(t(X)\leq y) = P(X\leq t^{-1}(y)) = F_X(t^{-1}(y))$
(By plotting $F_X(t^{-1}(y))$ against $y$ , this has the "stretching" effect on the x-axis you mentioned - the values on the vertical axis are unchanged but are shifted on the horizontal axis.)
Now we can see where that $\frac{1}{x}$ term came from in the lognormal pdf.
Recall we had:
$F_Y(y) = F_X(t^{-1}(y))$
So
$f_Y(y) = \frac{d}{dy} F_X(t^{-1}(y)) = f_X(t^{-1}(y))\cdot \frac{d}{dy}t^{-1}(y)$
A similar result can be derived for monotonic decreasing transformations, yielding the more general result for invertible transformations:
$f_Y(y) = \frac{d}{dy} F_X(t^{-1}(y)) = f_X(t^{-1}(y))\cdot |\frac{d}{dy}t^{-1}(y)|$
When $t$ is the $\exp$ function, $t^{-1}$ is the log, which has the reciprocal as its derivative.
So you do that axis transformation you thought about, but you then have an additional factor, the Jacobian of the transformation, which changes the height. So far it's quite clear that we must have that term when we go to the pdf from the CDF.
But we can also explain more directly why we need it:
Loosely, note that if you have a very small interval $[x,x+\delta x)$ for which $f$ is effectively constant (so the area is effectively $f(x)\,\delta x$), if you stretch the axis by transforming it as for the cdf, the total area in the transformed small interval is changed by the stretching, but the probability of being in the interval is unchanged. So to preserve the probability represented by the small area, you need to "undo" the impact of the stretching on the small area so that it still represents the probability. The area is kept the same by modifying the height. (This is what the Jacobian does -- preserve small areas.)
Note that dividing in our example by $t'(x)=\exp(x)$ is in that case the same as dividing by $y$, which is the scaling factor we get from the Jacobian calculation above for $t(x)=\exp(x)$.
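As a quick numerical sanity check of the $t=\exp$ case (so $f_Y(y)=f_X(\log y)/y$, the lognormal density for standard normal $X$), we can compare that formula against a finite-difference derivative of $F_Y(y)=F_X(\log y)$; the function names here are just for illustration:

```python
import math

def std_normal_pdf(x):
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def std_normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def lognormal_pdf(y):
    # f_Y(y) = f_X(t^{-1}(y)) * |d/dy t^{-1}(y)|, with t = exp, t^{-1} = log
    return std_normal_pdf(math.log(y)) / y

def numeric_density(y, h=1e-6):
    # Central finite difference of F_Y(y) = F_X(log y)
    return (std_normal_cdf(math.log(y + h))
            - std_normal_cdf(math.log(y - h))) / (2.0 * h)
```

The two agree to numerical precision, confirming the $1/y$ Jacobian factor.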
|
47,840
|
Martingale process
|
Let $X_t = M_t^{-1}e^{\xi_t}$. Then
$$\mathbb E[|X_t|] = \mathbb E\left[\frac{e^{\xi_t}}{\mathbb E\left[e^{\xi_t}\right]}\right] = 1 $$ so that $X_t$ is integrable, and for $s<t$ we have
\begin{align}
\mathbb E[X_t\mid\mathcal F_s] &= \mathbb E\left[ \frac{e^{\xi_t}}{\mathbb E\left[e^{\xi_t}\right]}\,\big\vert\, \mathcal F_s\right]\\
&= \mathbb E\left[e^{\xi_t-\xi_s}e^{\xi_s}\mid\mathcal F_s\right]\mathbb E\left[e^{\xi_t}\right]^{-1}\\
&=\mathbb E\left[e^{\xi_t-\xi_s}\right]e^{\xi_s}\mathbb E\left[ e^{\xi_t-\xi_s}e^{\xi_s}\right]^{-1}\\
&= \mathbb E\left[e^{\xi_s} \right]^{-1}e^{\xi_s}\\
&= X_s,
\end{align}
which implies that $X_t$ is a martingale. (The step from the second to the third line uses the $\mathcal F_s$-measurability of $e^{\xi_s}$ and the independence of the increment $\xi_t-\xi_s$ from $\mathcal F_s$.)
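A Monte Carlo sanity check, taking $\xi_t = B_t$ a standard Brownian motion (which has the independent increments the argument relies on): then $M_t = \mathbb E[e^{B_t}] = e^{t/2}$, so $X_t = e^{B_t - t/2}$ should average to $X_0 = 1$ at every time.

```python
import math
import random

# Sample mean of X_t = exp(B_t - t/2) at t = 1 over many paths;
# by the martingale property it should be close to 1.
rng = random.Random(42)
t, n = 1.0, 200_000
mean_X = sum(math.exp(rng.gauss(0.0, math.sqrt(t)) - t / 2.0)
             for _ in range(n)) / n
```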
|
47,841
|
Variance of the modulus of a random variable
|
So
$$ \def\var{\text{var}} \var\bigl( |X| \bigr) = E\left(X^2\right) - E\bigl( |X| \bigr)^2.$$
You know how to write $E(X^2)$ in terms of $\mu$ and $\sigma$.
Now define a new random variable $X^+$ by $X^+ = X$ if $X>0$, and $X^+=0$ if $X\le 0$; similarly let $X^- = X$ if $X < 0$ and $X^-=0$ if $X\ge 0$.
Assuming both $E\left(X^+\right)$ and $E\left(X^-\right)$ exist, show that
$$ \var\bigl( |X| \bigr)= \var(X) + 4E\left(X^+\right)E\left(X^-\right).$$
Show that this is $\le \var(X)$, and check that the bound is tight.
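Here is a quick exact check of the stated identity on a small two-point distribution of my own choosing, with $X^+=\max(X,0)$ and $X^-=\min(X,0)$ as defined above:

```python
# Exact check of var(|X|) = var(X) + 4 E(X+) E(X-) for a discrete X.
vals = [-1.0, 2.0]
probs = [0.5, 0.5]

def E(f):
    # Expectation of f(X) over the two-point distribution.
    return sum(p * f(v) for v, p in zip(vals, probs))

var_X = E(lambda v: v * v) - E(lambda v: v) ** 2
var_absX = E(lambda v: v * v) - E(abs) ** 2
rhs = var_X + 4.0 * E(lambda v: max(v, 0.0)) * E(lambda v: min(v, 0.0))
```

Since $E(X^-) \le 0 \le E(X^+)$, the correction term is nonpositive, which is why the bound $\text{var}(|X|) \le \text{var}(X)$ follows.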
|
47,842
|
Variance of the modulus of a random variable
|
We know that
$$\;\;\;\;-|X| \leq X \leq |X|\\
\Rightarrow \big|E\big(X\big)\big| \leq E\big(|X|\big)\\
\Rightarrow E\big(X\big)^2 = \big|E\big(X\big)\big|^2 \leq E\big(|X|\big)^2
$$
Using the above in
$$ \def\var{\text{var}} \var\bigl( |X| \bigr) = E\left(X^2\right) - E\bigl( |X| \bigr)^2.$$
we get
$$ \var\bigl( |X| \bigr) \leq E\left(X^2\right) - E\bigl( X \bigr)^2 = \var(X)$$
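Two quick numeric illustrations with toy two-point distributions of my own: a symmetric $X$ makes the inequality strict, while a nonnegative $X$ (where $|X|=X$) makes it an equality.

```python
def variance(vals, probs):
    # var = E[X^2] - E[X]^2 for a finite discrete distribution.
    m1 = sum(p * v for v, p in zip(vals, probs))
    m2 = sum(p * v * v for v, p in zip(vals, probs))
    return m2 - m1 ** 2

var_sym = variance([-1.0, 1.0], [0.5, 0.5])       # var(X) = 1
var_abs_sym = variance([1.0, 1.0], [0.5, 0.5])    # var(|X|) = 0 < 1
var_pos = variance([0.0, 1.0], [0.5, 0.5])        # X >= 0: var(|X|) = var(X)
```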
|
47,843
|
Impact of inverting grayscale values on mnist dataset
|
Here's a quick test on the mnist_softmax implementation from the tensorflow tutorial. You can append this code at the end of the file to reproduce the result.
In the MNIST input data, pixel values range from 0 (black background) to 255 (white foreground), which is usually scaled in the [0,1] interval.
In tensorflow, the actual output of mnist.train.next_batch(batch_size) is indeed a (batch_size, 784) matrix for the train data in that format. Now let's invert that grayscale, by doing batch_xs = 1-batch_xs. We can now measure the performance with the classification accuracy for both the normal and the inverted input data, and average this accuracy on 100 trials in each of which we perform 50 updates of the neural network.
n_trials = 100
n_iter = 50
accuracy_history = np.zeros((2,n_trials))
batch_size = 100
for k, preprocessing in enumerate(['normal','reversed']):
sess.run(init)
for t in range(n_trials):
for i in range(n_iter):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
if preprocessing == 'reversed':
batch_xs = 1-batch_xs
sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
images = mnist.test.images
if preprocessing == 'reversed':
images = 1-mnist.test.images
accuracy_history[k,t] = sess.run(accuracy, feed_dict={x: images,
y_: mnist.test.labels})
print(accuracy_history.mean(axis=1))
>> Out[58]: array([ 0.91879 , 0.837478])
To answer your question, inverting grayscale values does impact performance.
I believe data centering is one of the reasons why black on white performs worse than white on black. In general in machine learning, it is good practice to normalize and center the data. When you think of the MNIST dataset, most pixels on the images are black, so that the mean is close to 0, whereas if you inverted it, it would be close to 1 (or 255 if you didn't scale down).
More importantly, in Neural Network updates, the weights corresponding to a 0 in the input are not going to be updated. You can see it experimentally by observing the evolution of the weights of your neural network after training (resp. W_begin and W_end). Below are two heatmaps representing the changes in absolute value of the weights.
from matplotlib.pyplot import imshow
heatmap = (np.abs(W_begin-W_end).max(axis=1)).reshape((28,28))
imshow(heatmap)
For this first image - white on black digits - you can see that the weights haven't changed at all on the image border (dark blue).
However on this second image - black on white digits - you can see there is a brighter blue on the border, meaning these weights have changed. But you also notice dark blue regions near the center of the image, which shows that weights haven't much evolved in this area.
Intuitively, we don't care about updating weights at the border because the corresponding pixels do not discriminate the different classes of digits. However, we do care about the weights in the middle of the image for the opposite reason. This explains why white on black MNIST performs better than black on white.
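The zero-input/zero-update point can be seen in a tiny single-unit logistic model (an assumed toy setup, not the tutorial's network): the gradient for weight $j$ is $(p - y)\,x_j$, so a feature that is always 0 never moves its weight, mimicking the untouched border pixels.

```python
import numpy as np

rng = np.random.default_rng(0)
w = np.zeros(4)
for _ in range(100):
    x = np.concatenate(([0.0], rng.random(3)))  # feature 0: always-black pixel
    y = 1.0
    p = 1.0 / (1.0 + np.exp(-w @ x))            # sigmoid output
    w -= 0.1 * (p - y) * x                      # one SGD step; grad is prop. to x
```

After training, the weight tied to the always-zero feature is still exactly zero while the others have moved.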
|
47,844
|
Interpretation of coefficients in logistic regression output
|
Summary
The question misinterprets the coefficients.
The software output shows that the log odds of the response don't depend appreciably on $X$, because its coefficient is small and not significant ($p=0.138$). Therefore the proportion of positive results in the data, equal to $100\% - 19.95\% \approx 80\%$, ought to have a log odds close to the intercept of $1.64$. Indeed,
$$\log\left(\frac{80\%}{20\%}\right) = \log(4) \approx 1.4$$
is only about one standard error ($0.22$) away from the intercept. Everything looks consistent.
Detailed analysis
This generalized linear model supposes that the log odds of the response $H$ being $1$ when the independent variable $X$ has a particular value $x$ is some linear function of $x$,
$$\text{Log odds}(H=1\,|\,X=x) = \beta_0 + \beta_1 x.\tag{1}$$
The glm command in R estimated these unknown coefficients with values $$\hat\beta_0 = 1.641666\pm 0.2290133$$ and $$\hat\beta_1 = -0.0014039\pm 0.0009466.$$
The dataset contains a large number $n$ of observations with various values of $x$, written $x_i$ for $i=1, 2, \ldots, n$, which range from $82.3$ to $391.6$ and average $\bar x = 223.8$. Formula $(1)$ enables us to compute the estimated probabilities of each outcome, $\Pr(H=1\,|\,X=x_i)$. If the model is any good, the average of those probabilities ought to be close to the average of the outcomes.
Since the odds are, by definition, the ratio of a probability to its complement, we can use simple algebra to find the estimated probabilities in terms of the log odds
$$\widehat\Pr(H=1\,|\,X=x) = 1 - \frac{1}{1 + \exp\left(\hat\beta_0 + \hat\beta_1 x\right)}.$$
As a nonlinear function of $x$, that's difficult to average. However, provided $\beta_1 x$ is small (much less than $1$ in size) and $1+\exp(\hat\beta_0)$ is not small (it exceeds $6$ in this case), we can safely use a linear approximation
$$\frac{1}{1 + \exp\left(\hat\beta_0 + \hat\beta_1 x\right)} = \frac{1}{1 + \exp(\hat\beta_0)}\left(1 - \hat\beta_1 x \frac{\exp(\hat\beta_0)}{1 + \exp(\hat\beta_0)}\right) + O\left(\hat\beta_1 x\right)^2.$$
Since the $x_i$ never exceed $391.6$, $|\hat\beta_1 x_i|$ never exceeds $391.6\times 0.0014039 \approx 0.55$, so we're ok. Consequently, the average of the outcomes may be approximated as
$$\eqalign{
\frac{1}{n}\sum_{i=1}^n \widehat\Pr(H=1\,|\,X=x)
&\approx \frac{1}{n}\sum_{i=1}^n \left(1 - \frac{1}{1 + \exp(\hat\beta_0)}\left(1 - \hat\beta_1 x_i \frac{\exp(\hat\beta_0)}{1 + \exp(\hat\beta_0)}\right)\right)\\
&= 0.162238 + 0.000190814 \bar{x} \\
&= 20.4943\%.
}$$
Although that's not exactly equal to the $19.95\%$ observed in the data, it is more than close enough, because $\hat\beta_1$ has a relatively large standard error. For example, if $\beta_1$ were increased by only $0.3$ of its standard error to $-0.0011271$, then the previous calculation would produce $19.95\%$ exactly.
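These checks are easy to reproduce numerically. Lacking the actual $x_i$, the fitted probability below is evaluated only at $\bar x$ (a cruder shortcut than averaging over the data as above), which gives roughly 21%, consistent with the observed ~20%:

```python
import math

# Reported estimates from the glm output quoted above.
beta0, beta1 = 1.641666, -0.0014039
xbar = 223.8  # mean of x, as quoted

log_odds_80_20 = math.log(0.80 / 0.20)  # ~1.386, about one SE below beta0
# Fitted P(H = 0) at the mean of x:
p_zero_at_mean = 1.0 / (1.0 + math.exp(beta0 + beta1 * xbar))
```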
|
47,845
|
Clustering of variables: but they are mixed type, some are numeric, some are categorical
|
Traditional FA and cluster algorithms were designed for use with continuous (i.e., gaussian) variables. Mixtures of continuous and qualitative variables invariably give erroneous results. In particular and in my experience, the categorical information will dominate the solution.
A better approach would be to employ a variant of finite mixture models which are often intended for use with mixtures of continuous and categorical information. Latent class mixture models (which are FMMs) have a huge literature built up around them. Much of that literature is focused in the field of marketing science where these methods see wide use for, e.g., consumer segmentation...but that's not the only field where they are used.
The software I know and recommend for latent class modeling is neither free nor R-based but, in terms of proprietary software, it's not that expensive. It's called Latent Gold, is sold by Statistical Innovations and costs about $1,000 for a perpetual license. If your project has a budget, it could easily be expensed. LG offers a wide suite of tools including FA for mixtures, clustering of mixtures, longitudinal markov chain-based clustering, and more.
Otherwise, the only R-based freeware I know about (poLCA, https://www.jstatsoft.org/article/view/v042i10) is intended for use with multi-way contingency tables. I'm not aware that this tool can accept anything other than categorical information. There may be others. If you poke around, maybe you can find some alternatives.
|
47,846
|
Clustering of variables: but they are mixed type, some are numeric, some are categorical
|
Clusters of correlations are best investigated by factor analysis. There are a number of different implementations of factor analysis in R, and I would recommend the package 'psych' on CRAN as a starting point:
http://www.personality-project.org/r/psych/
http://www.personality-project.org/r/#factoranal
You can trick cor() into accepting logicals, because every logical is either 0 or 1 in R:
> TRUE*7
[1] 7
> FALSE*7
[1] 0
You just need to change the type using as.numeric() as in
a <- c(TRUE, TRUE, FALSE, FALSE, FALSE)
as.numeric(a)
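The same coercion idea works in Python/NumPy, for what it's worth (the data here are invented): booleans behave as 0/1 in arithmetic, so a Pearson correlation can be computed after a simple cast.

```python
import numpy as np

a = np.array([True, True, False, False, False])
x = np.array([2.0, 1.8, 0.3, 0.2, 0.1])
# Cast booleans to 0/1 floats and compute Pearson r:
r = np.corrcoef(a.astype(float), x)[0, 1]
```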
Hope that helps!
|
47,847
|
Clustering of variables: but they are mixed type, some are numeric, some are categorical
|
So, you have a mixture of categorical boolean and numeric continuous variables. You want to cluster the variables (not data cases) based on their similarity.
A correlation coefficient could be assumed the similarity measure. We could, for example, compute Pearson $r$. Given that boolean true/false is convertible into 1/0 binary values, $r$ is computable. $r \text {(numeric,numeric)}$ is classic $r$; $r \text {(binary,binary)}$ is point-point $r$ or Phi coefficient; $r \text {(numeric,binary)}$ is point-biserial $r$. All these are hypostasized Pearsonian correlation.
You may go straightforward and do the analysis (cluster) based on those three kinds of correlation values collected in one matrix. You may do it if you see the boolean/binary data as profoundly dichotomous, where no underlying continuous variable is conceivable in the background.
But then some critic might take the stance that there is no theoretical (philosophical) way at all to compare a similarity between categorical features with a similarity between scale features. That view would then suggest you dichotomize your continuous variables - in some way - and forget that they were scale before. So all the data are binary and you are fine.
Whereas if you choose to accept the idea of an underlying continuous variable, then using the aforesaid initial correlation matrix directly in the analysis stumbles against another snag. The problem is that - because a manifest binary variable (i.e. a dichotomized underlying one) is only 2-valued while a continuous manifest variable is many-valued - the magnitudes of the three coefficients are risky to compare directly. See, for example, the 2nd paragraph here. In short, coefficients involving a binary variable are highly sensitive to the cut point taken at the hypothetical dichotomization of its underlying precursor variable. One way out would be to try to "restore" (infer) the correlation values which "existed" before the dichotomizations. That means computing tetrachoric correlations in place of point-point $r$s and biserial correlations in place of point-biserial $r$s. If needed, the whole matrix might then be "smoothed" towards positive-definiteness.
Another approach (not unquestionable, as any is) might be to rescale correlations to their empirically accessible range in the given data. This trick is, so to speak, atheoretical; it may or may not imply the existence of an underlying continuous variable for the dichotomous ones. The idea is simply to take away the effect of any skew of the variables' marginal distributions on the coefficients. $r_{rescaled}=r/r_{max}$; for example, if the observed $r$ is $.4$ and the maximal possible value for these two variables is $.95$ (which you get after sorting both variables' data ascendingly), then the rescaled value is $.42$. The whole matrix might again be "smoothed" into p.s.d. in the end.
An alternative to the previous approach (taking away the marginal effect) might be to compute nonparametric correlations instead of $r$ - such as rank-based Spearman rho or Kendall tau. It is also an option. And at this point, having come full circle, we begin to see the further option of dichotomizing the scale variables (instead of ranking them) - which is where the discussion started.
After you compute correlations (or you would like other similarity measures?) you will have to decide on the clustering method - for example one of hierarchical methods. But here starts another story. You might also want to use Factor analysis in place of Cluster analysis: although factor analysis is not clustering but rather latent variable technique, it gives "clusters", in some sense.
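To make the rescaling idea ($r_{rescaled}=r/r_{max}$) concrete, here is a small numeric sketch (Python/numpy rather than any particular package; the skewed binary variable and the effect size are invented for illustration). The maximal attainable $r$ is obtained by pairing both variables' sorted values:

```python
import numpy as np

def max_pearson(x, y):
    # Largest r attainable for these two marginal distributions:
    # pair the sorted values of x with the sorted values of y.
    return np.corrcoef(np.sort(x), np.sort(y))[0, 1]

rng = np.random.default_rng(0)
x = (rng.random(200) < 0.2).astype(float)   # skewed binary variable
y = rng.normal(size=200) + 2 * x            # continuous variable related to x

r = np.corrcoef(x, y)[0, 1]
r_rescaled = r / max_pearson(x, y)          # observed r relative to its ceiling
```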
|
Clustering of variables: but they are mixed type, some are numeric, some are categorical
|
So, you have a mixture of categorical boolean and numeric continuous variables. You want to cluster the variables (not data cases) based on their similarity.
A correlation coefficient could be assumed
|
Clustering of variables: but they are mixed type, some are numeric, some are categorical
So, you have a mixture of categorical boolean and numeric continuous variables. You want to cluster the variables (not data cases) based on their similarity.
A correlation coefficient could be assumed the similarity measure. We could, for example, compute Pearson $r$. Given that boolean true/false is convertible into 1/0 binary values, $r$ is computable. $r \text {(numeric,numeric)}$ is classic $r$; $r \text {(binary,binary)}$ is point-point $r$ or Phi coefficient; $r \text {(numeric,binary)}$ is point-biserial $r$. All these are hypostasized Pearsonian correlation.
You may go straightforward and do the analysis (cluster) based on those three kinds of correlation values collected in one matrix. You may do it if you see the boolean/binary data as profoundly dichotomous, where no underlying continuous variable is conceivable in the background.
But then some critic might take the stance that there is no theoretical (philosophical) way at all to compare a similarity between categorical features with a similarity between scale features. That view would then suggest you dichotomize your continuous variables - in some way - and forget that they were scale before. So all the data are binary and you are fine.
Whereas if you choose to accept the idea of an underlying continuous variable, then using the aforesaid initial correlation matrix directly in the analysis stumbles against another snag. The problem is that - because a manifest binary variable (i.e. a dichotomized underlying one) is only 2-valued while a continuous manifest variable is many-valued - the magnitudes of the three coefficients are risky to compare directly. See, for example, the 2nd paragraph here. In short, coefficients involving a binary variable are highly sensitive to the cut point taken at the hypothetical dichotomization of its underlying precursor variable. One way out would be to try to "restore" (infer) the correlation values which "existed" before the dichotomizations. That means computing tetrachoric correlations in place of point-point $r$s and biserial correlations in place of point-biserial $r$s. If needed, the whole matrix might then be "smoothed" towards positive-definiteness.
Another approach (not unquestionable, as any is) might be to rescale correlations to their empirically accessible range in the given data. This trick is, so to speak, atheoretical; it may or may not imply the existence of an underlying continuous variable for the dichotomous ones. The idea is simply to take away the effect of any skew of the variables' marginal distributions on the coefficients. $r_{rescaled}=r/r_{max}$; for example, if the observed $r$ is $.4$ and the maximal possible value for these two variables is $.95$ (which you get after sorting both variables' data ascendingly), then the rescaled value is $.42$. The whole matrix might again be "smoothed" into p.s.d. in the end.
An alternative to the previous approach (taking away the marginal effect) might be to compute nonparametric correlations instead of $r$ - such as rank-based Spearman rho or Kendall tau. It is also an option. And at this point, having come full circle, we begin to see the further option of dichotomizing the scale variables (instead of ranking them) - which is where the discussion started.
After you compute correlations (or you would like other similarity measures?) you will have to decide on the clustering method - for example one of hierarchical methods. But here starts another story. You might also want to use Factor analysis in place of Cluster analysis: although factor analysis is not clustering but rather latent variable technique, it gives "clusters", in some sense.
|
Clustering of variables: but they are mixed type, some are numeric, some are categorical
So, you have a mixture of categorical boolean and numeric continuous variables. You want to cluster the variables (not data cases) based on their similarity.
A correlation coefficient could be assumed
|
47,848
|
Clustering of variables: but they are mixed type, some are numeric, some are categorical
|
You could one-hot encode your binary features and normalize your data to enable correlation computation:
library(caret)
df <- data.frame(scale(data.frame(predict(dummyVars(~., df), df))))
library(corrplot)
corrplot(cor(df))
Based on this you could apply any clustering approach (example with K-Means, but also look into the details of factor analysis as suggested by @Bernhard):
km <- kmeans(x = t(df), centers = 3, iter.max = 1000)
print(km)
print(km$cluster)
print(km$centers)
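For readers outside R, the first half of this pipeline (one-hot/0-1 encode, z-score each column, correlate) can be sketched with plain numpy; the toy columns below are invented for illustration, and the resulting matrix `C` (or the transposed standardized data) could then feed any clusterer:

```python
import numpy as np

# Toy "data frame": two numeric columns and one boolean, as plain arrays.
# (The answer uses R's caret::dummyVars + scale; this is the same idea.)
cols = {
    "age":    np.array([23., 35., 31., 52., 46.]),
    "income": np.array([30., 58., 44., 80., 72.]),
    "member": np.array([0., 1., 0., 1., 1.]),   # boolean encoded as 0/1
}
X = np.column_stack(list(cols.values()))
Xz = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize each column
C = np.corrcoef(Xz, rowvar=False)           # variable-by-variable correlations
```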
|
Clustering of variables: but they are mixed type, some are numeric, some are categorical
|
You could one-hot encode your binary features and normalize your data to enable correlation computation:
library(caret)
df <- data.frame(scale(data.frame(predict(dummyVars(~., df), df))))
library(corr
|
Clustering of variables: but they are mixed type, some are numeric, some are categorical
You could one-hot encode your binary features and normalize your data to enable correlation computation:
library(caret)
df <- data.frame(scale(data.frame(predict(dummyVars(~., df), df))))
library(corrplot)
corrplot(cor(df))
Based on this you could apply any clustering approach (example with K-Means, but also look into the details of factor analysis as suggested by @Bernhard):
km <- kmeans(x = t(df), centers = 3, iter.max = 1000)
print(km)
print(km$cluster)
print(km$centers)
|
Clustering of variables: but they are mixed type, some are numeric, some are categorical
You could one-hot encode your binary features and normalize your data to enable correlation computation:
library(caret)
df <- data.frame(scale(data.frame(predict(dummyVars(~., df), df))))
library(corr
|
47,849
|
Clustering of variables: but they are mixed type, some are numeric, some are categorical
|
Because you have mostly either continuous variables or binary variables, the suggestion made by @geekoverdose is certainly an option. The main issue that arises when taking this approach is dealing with nominal variables with more than two categories (or binary variables with rare classes). In this case, 1-1 matches are important and 0-0 matches probably aren't. In other words, your variable is asymmetric binary (see here for a nice explanation).
Just using Euclidean distance with k-means will ignore this. On the other hand, using your suggestion of Gower similarity will not. This is because nominal variables are handled via the dice coefficient, which essentially just one-hot encodes the data and ignores 0-0 when computing the similarity. This is easily done using the daisy function in the cluster package, just be sure to have each variable set as the correct type in the data frame.
To cluster this distance matrix, you then just need to choose an algorithm that can handle a custom distance matrix. K-medoids is one, and it has an implementation in R using the pam function.
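In case it helps to see roughly what such a mixed-type distance computes, here is a bare-bones sketch of the Gower distance in Python/numpy (illustrative only: it treats categorical columns as simple mismatches and omits the asymmetric-binary handling that daisy offers):

```python
import numpy as np

def gower_distance(num, cat):
    """Sketch of Gower distance for mixed data: num is an (n, p_num)
    numeric array, cat an (n, p_cat) array of category codes. Numeric
    contributions are range-normalized absolute differences; categorical
    contributions are 0/1 mismatches; the result averages over variables."""
    n = num.shape[0]
    rng = num.max(axis=0) - num.min(axis=0)
    rng[rng == 0] = 1.0                     # avoid division by zero
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            d_num = np.abs(num[i] - num[j]) / rng        # each in [0, 1]
            d_cat = (cat[i] != cat[j]).astype(float)     # 0/1 mismatch
            D[i, j] = np.concatenate([d_num, d_cat]).mean()
    return D

# Invented toy data: rows 0 and 1 are similar, row 2 is far away.
num = np.array([[1.0, 10.0], [2.0, 20.0], [9.0, 90.0]])
cat = np.array([[0], [0], [1]])
D = gower_distance(num, cat)
```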
|
Clustering of variables: but they are mixed type, some are numeric, some are categorical
|
Because you have mostly either continuous variables or binary variables, the suggestion made by @geekoverdose is certainly an option. The main issue that arises when taking this approach is dealing wi
|
Clustering of variables: but they are mixed type, some are numeric, some are categorical
Because you have mostly either continuous variables or binary variables, the suggestion made by @geekoverdose is certainly an option. The main issue that arises when taking this approach is dealing with nominal variables with more than two categories (or binary variables with rare classes). In this case, 1-1 matches are important and 0-0 matches probably aren't. In other words, your variable is asymmetric binary (see here for a nice explanation).
Just using Euclidean distance with k-means will ignore this. On the other hand, using your suggestion of Gower similarity will not. This is because nominal variables are handled via the dice coefficient, which essentially just one-hot encodes the data and ignores 0-0 when computing the similarity. This is easily done using the daisy function in the cluster package, just be sure to have each variable set as the correct type in the data frame.
To cluster this distance matrix, you then just need to choose an algorithm that can handle a custom distance matrix. K-medoids is one, and it has an implementation in R using the pam function.
|
Clustering of variables: but they are mixed type, some are numeric, some are categorical
Because you have mostly either continuous variables or binary variables, the suggestion made by @geekoverdose is certainly an option. The main issue that arises when taking this approach is dealing wi
|
47,850
|
Intuitive explanation of state space models
|
The good news is that your instincts are right that it would be a useful technique. The bad news is that it's not a technique that you can use without understanding a fair amount of linear algebra. It's all about multiple equations with multiple matrix multiplications.
Some tools like R's bsts package make it more accessible, but it's fundamentally more complex than alternatives. Not that you should be using ARIMA or other methods without some level of technical sophistication, but in my experience most state space (also called dynamic linear model) packages have gaps where you'll need to know what parts of various matrices represent and mean.
Given all of that, as a readable introduction I'd recommend "An Introduction to State Space Time Series Analysis" by Jacques J.F. Commandeur and Siem Jan Koopman, Oxford 2007. It's a short book and used to be pretty expensive, but it appears that it may have been released on the Internet. I don't believe this book mentions mixed-frequency data, though.
And if you use R, you should check out bsts.
|
Intuitive explanation of state space models
|
The good news is that your instincts are right that it would be a useful technique. The bad news is that it's not a technique that you can use without understanding a fair amount of linear algebra. It
|
Intuitive explanation of state space models
The good news is that your instincts are right that it would be a useful technique. The bad news is that it's not a technique that you can use without understanding a fair amount of linear algebra. It's all about multiple equations with multiple matrix multiplications.
Some tools like R's bsts package make it more accessible, but it's fundamentally more complex than alternatives. Not that you should be using ARIMA or other methods without some level of technical sophistication, but in my experience most state space (also called dynamic linear model) packages have gaps where you'll need to know what parts of various matrices represent and mean.
Given all of that, as a readable introduction I'd recommend "An Introduction to State Space Time Series Analysis" by Jacques J.F. Commandeur and Siem Jan Koopman, Oxford 2007. It's a short book and used to be pretty expensive, but it appears that it may have been released on the Internet. I don't believe this book mentions mixed-frequency data, though.
And if you use R, you should check out bsts.
|
Intuitive explanation of state space models
The good news is that your instincts are right that it would be a useful technique. The bad news is that it's not a technique that you can use without understanding a fair amount of linear algebra. It
|
47,851
|
Testing mediation and moderation; can one variable function as both mediator and moderator?
|
From definitions, I feel that a variable can not simultaneously function as mediator and moderator. Let's try to investigate both effects:
Mediation
Mediation is a hypothesized causal chain in which one variable affects a second variable that, in turn, affects a third variable. The intervening variable, $M$, is the mediator. It mediates the relationship between a predictor, $X$, and an outcome, $Y$. Graphically, mediation can be depicted in the following way: $$X \longrightarrow M \longrightarrow Y$$
Testing mediation
Inspect if $Y$ is influenced by $X$ with $\hat y = \beta_0 + \beta_1x$
See if $M$ is influenced by $X$ with $\hat m = \beta_0 + \beta_1x$
See if $Y$ is influenced by $M$ with $\hat y = \beta_0 + \beta_1m$
If one or more of these relationships are nonsignificant, researchers usually conclude that mediation is not possible or likely. Assuming the above steps yield significant results,
Conduct a multiple regression to see the influence of $X$ and $M$ on $Y$ with $\hat y = \beta_0 + \beta_1x + \beta_2m$
If $X$ is no longer significant when $M$ is controlled, the finding supports full mediation. If $X$ remains significant, i.e., $X$ and $M$ both significantly predict $Y$, the finding indicates partial mediation.
Testing moderation
Let's assume a student's GPA (outcome variable) is affected not only by study-time (independent variable), but also by gender (moderating variable). In order to test the moderation effect of gender, add to the regression equation the interaction term between study-time and gender. $$GPA = \beta_0 + \beta_1x_{studytime} + \beta_2x_{gender} + \beta_3x_{studytime}x_{gender}$$
If $\beta_3$ is significant, there exists moderation.
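The mediation-testing steps above can be sketched numerically (Python/numpy here rather than a statistics package, so coefficients are inspected rather than formally significance-tested; the simulated effect sizes are invented, with full mediation built in by design):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)
m = 0.8 * x + rng.normal(size=n)   # X -> M
y = 0.6 * m + rng.normal(size=n)   # M -> Y; X affects Y only through M

def ols(yv, *preds):
    # Least-squares fit with an intercept; returns [b0, b1, ...].
    X = np.column_stack([np.ones(len(yv)), *preds])
    return np.linalg.lstsq(X, yv, rcond=None)[0]

b_yx   = ols(y, x)[1]      # step 1: Y on X
b_mx   = ols(m, x)[1]      # step 2: M on X
b_ym   = ols(y, m)[1]      # step 3: Y on M
b_full = ols(y, x, m)      # step 4: Y on X and M together

# With full mediation, the coefficient on X in step 4 shrinks toward 0
# while the coefficient on M stays sizeable.
```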
|
Testing mediation and moderation; can one variable function as both mediator and moderator?
|
From definitions, I feel that a variable can not simultaneously function as mediator and moderator. Let's try to investigate both effects:
Mediation
Mediation is a hypothesized causal chain in which o
|
Testing mediation and moderation; can one variable function as both mediator and moderator?
From definitions, I feel that a variable can not simultaneously function as mediator and moderator. Let's try to investigate both effects:
Mediation
Mediation is a hypothesized causal chain in which one variable affects a second variable that, in turn, affects a third variable. The intervening variable, $M$, is the mediator. It mediates the relationship between a predictor, $X$, and an outcome, $Y$. Graphically, mediation can be depicted in the following way: $$X \longrightarrow M \longrightarrow Y$$
Testing mediation
Inspect if $Y$ is influenced by $X$ with $\hat y = \beta_0 + \beta_1x$
See if $M$ is influenced by $X$ with $\hat m = \beta_0 + \beta_1x$
See if $Y$ is influenced by $M$ with $\hat y = \beta_0 + \beta_1m$
If one or more of these relationships are nonsignificant, researchers usually conclude that mediation is not possible or likely. Assuming the above steps yield significant results,
Conduct a multiple regression to see the influence of $X$ and $M$ on $Y$ with $\hat y = \beta_0 + \beta_1x + \beta_2m$
If $X$ is no longer significant when $M$ is controlled, the finding supports full mediation. If $X$ remains significant, i.e., $X$ and $M$ both significantly predict $Y$, the finding indicates partial mediation.
Testing moderation
Let's assume a student's GPA (outcome variable) is affected not only by study-time (independent variable), but also by gender (moderating variable). In order to test the moderation effect of gender, add to the regression equation the interaction term between study-time and gender. $$GPA = \beta_0 + \beta_1x_{studytime} + \beta_2x_{gender} + \beta_3x_{studytime}x_{gender}$$
If $\beta_3$ is significant, there exists moderation.
|
Testing mediation and moderation; can one variable function as both mediator and moderator?
From definitions, I feel that a variable can not simultaneously function as mediator and moderator. Let's try to investigate both effects:
Mediation
Mediation is a hypothesized causal chain in which o
|
47,852
|
Testing mediation and moderation; can one variable function as both mediator and moderator?
|
Here is an article giving an example of a moderating mediator.
https://www.sciencedirect.com/science/article/abs/pii/S0005789417301144
This explains how a mediator may later become a moderator; however, I would speculate that under most circumstances (particularly in biopsychology) mediation tests detect only statistical mediation, in which case “full mediation” cannot be truly observed unless all other potential mediators are controlled for in the analysis (along with other necessary conditions, such as random assignment to experimental condition and a longitudinal design). That being said, a mechanism that partially explains an association between the independent and dependent variables may also moderate some path (a2 or b2) of a second mediator.
For example the relation between poverty and poor health may be mediated (partially) by substance use, and partially by social support network size; however it’s also possible that the effect of poverty on substance abuse (a1 path) may be moderated by social support (i.e., people in poverty may only be more likely to engage in substance abuse if they don’t have a quality social network).
For more current methodology for mediation analysis please see Andrew Hayes.
|
Testing mediation and moderation; can one variable function as both mediator and moderator?
|
Here is an article giving an example of a moderating mediator.
https://www.sciencedirect.com/science/article/abs/pii/S0005789417301144
This explains how a mediator may later become a moderator, howeve
|
Testing mediation and moderation; can one variable function as both mediator and moderator?
Here is an article giving an example of a moderating mediator.
https://www.sciencedirect.com/science/article/abs/pii/S0005789417301144
This explains how a mediator may later become a moderator; however, I would speculate that under most circumstances (particularly in biopsychology) mediation tests detect only statistical mediation, in which case “full mediation” cannot be truly observed unless all other potential mediators are controlled for in the analysis (along with other necessary conditions, such as random assignment to experimental condition and a longitudinal design). That being said, a mechanism that partially explains an association between the independent and dependent variables may also moderate some path (a2 or b2) of a second mediator.
For example the relation between poverty and poor health may be mediated (partially) by substance use, and partially by social support network size; however it’s also possible that the effect of poverty on substance abuse (a1 path) may be moderated by social support (i.e., people in poverty may only be more likely to engage in substance abuse if they don’t have a quality social network).
For more current methodology for mediation analysis please see Andrew Hayes.
|
Testing mediation and moderation; can one variable function as both mediator and moderator?
Here is an article giving an example of a moderating mediator.
https://www.sciencedirect.com/science/article/abs/pii/S0005789417301144
This explains how a mediator may later become a moderator, howeve
|
47,853
|
Testing mediation and moderation; can one variable function as both mediator and moderator?
|
TLDR: Moderation and mediation are two different things, but nothing prevents having both simultaneously. This is because a mediator may interact with the treatment, and interaction, if one variable is considered the treatment and the other an effect-modifier, means moderation. It seems to me, however, that in a mediation model you're interested in the actual portion of the total effect explained by the indirect path while, in a moderation model, the focus is on the interaction effect (regardless of how much this affects results in practice).
Long reply: In the model by Erikson et al., 2005 (pdf, more deeply explored (and extended) by Buis, 2010): https://www.stata-journal.com/article.html?article=st0182 , the total effect is the sum of direct and indirect effect (or, if we can consider odds ratios, their product). Let’s call
our treatment (or, in general, the variable whose effects we want to explore), $X$, and let’s assume it’s binary: $X=0$ for untreated/controls, and $X=1$ for treated;
our mediator $Z$, and $Z(0)$ its potential value under $X=0$, and $Z(1)$ its potential value under $X=1$;
our outcome $Y$, and $Y(0,0)$ its potential value under $X=0$, $Y(1,1)$ its potential value under $X=1$, $Y(1,0)$ its counterfactual value when $X=1$, but with $Z(0)$ and $Y(0,1)$ its counterfactual value when $X=0$, but with $Z(1)$. In practice, counterfactual values are those we can’t typically observe, because we have one treatment status and the mediator’s value in the case of the other treatment status.
The total effect is given by $Y(1,1)-Y(0,0)$ and can be calculated as the sum of one direct and one indirect effect in any case. This seems to rule out the existence of an interaction, but it’s not the case.
The problem is that there are two ways how we can make this calculation, depending on which counterfactual we use.
A) If we use $Y(1,0)$, we have, as direct effect: $Y(1,0)-Y(0,0)$, i.e., starting from a situation with no treatment, we see what would change if we gave the treatment, but without changing the mediator variable. As indirect effect we would have: $Y(1,1)-Y(1,0)$, i.e.: starting from the situation where the person is treated, but with the value of the mediating variable as if they were untreated, we see what would change if we moved the mediating variable to the one in case of treatment (so, the potential value in case of treatment).
B) If we use $Y(0,1)$, we have, as direct effect: $Y(1,1)-Y(0,1)$, i.e., starting from a counterfactual situation with no treatment, but the mediating variable at its value in case of treatment, we see what would change if we gave the treatment (so, the potential value in case of treatment).
As indirect effect we would have: $Y(0,1)-Y(0,0)$, i.e.: starting from the situation where the person is untreated, we see what would change if we moved the mediating variable to the one in case of treatment.
I see it as a matter of whether the direct or indirect effect moves first, in the path from no-treatment to treatment. As noticed by Buis (2010): “The logic behind these two methods is exactly the same, but they do not have to result in exactly the same estimates for the direct and indirect effects”. In my understanding, however, in the case of no interaction effects, this is just an estimation issue, because his model is based on estimating counterfactual probabilities through simulations, then deriving log-odds ratios, which may be slightly different depending on the path.
However, by introducing interaction effects, thus expressing the model as $Y=\alpha+\beta_1*X+\beta_2*Z+\beta_3*XZ$, it wouldn't be just an issue of estimation methods, but of identification. In fact, we would have:
$Y(1,1)= \alpha+\beta_1+\beta_2*Z(1)+\beta_3*Z(1)$;
$Y(1,0)= \alpha+\beta_1+\beta_2*Z(0)+\beta_3*Z(0)$;
$Y(0,1)= \alpha+\beta_2*Z(1)$;
$Y(0,0)= \alpha+\beta_2*Z(0)$.
From here, we have:
A) First method: the direct effect is equal to: $Y(1,0)-Y(0,0)=\beta_1+\beta_3*Z(0)$; the indirect effect to: $Y(1,1)-Y(1,0)= \beta_2*Z(1)+\beta_3*Z(1)- \beta_2*Z(0)-\beta_3*Z(0)=(\beta_2+\beta_3)*(Z(1)-Z(0))$;
B) Second method: the direct effect is equal to: $Y(1,1)-Y(0,1)=\beta_1+\beta_3*Z(1)$; the indirect effect to: $Y(0,1)-Y(0,0)= \beta_2*Z(1)-\beta_2*Z(0) = \beta_2*(Z(1)-Z(0))$.
You can notice that the difference between the direct (and indirect) effects in the two methods depends on the interaction parameter $\beta_3$ (it is $\beta_3*(Z(1)-Z(0))$), also corresponding to the difference between the total effect and the sum of the direct effect keeping the mediator at its no-treatment value ($Y(1,0)-Y(0,0)$) and the indirect effect in the case of no-treatment ($Y(0,1)-Y(0,0)$).
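A quick numeric check of the two decompositions (plain Python; the parameter values are purely illustrative):

```python
# Model: Y = b0 + b1*X + b2*Z + b3*X*Z, with illustrative coefficients.
b0, b1, b2, b3 = 0.5, 1.0, 0.7, 0.4   # alpha, beta_1, beta_2, beta_3
z0, z1 = 0.2, 0.9                     # Z(0), Z(1)

def Y(x, z):
    return b0 + b1 * x + b2 * z + b3 * x * z

total = Y(1, z1) - Y(0, z0)
# Method A: direct effect first with the mediator at its no-treatment value.
direct_a   = Y(1, z0) - Y(0, z0)      # beta_1 + beta_3*Z(0)
indirect_a = Y(1, z1) - Y(1, z0)      # (beta_2 + beta_3)*(Z(1) - Z(0))
# Method B: indirect effect first, then the direct effect.
direct_b   = Y(1, z1) - Y(0, z1)      # beta_1 + beta_3*Z(1)
indirect_b = Y(0, z1) - Y(0, z0)      # beta_2*(Z(1) - Z(0))
# Both pairs sum to the total effect, and the direct effects differ
# by exactly beta_3*(Z(1) - Z(0)).
```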
VanderWeele (2013): https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3563853/ actually offers a three-way decomposition that uses such a direct effect, such an indirect effect, and the interaction term (i.e. the difference between the total effect and the sum of the direct and indirect effects). Such an interaction term shows that the mediator can also act as an effect-modifier, thus can also be a moderator.
Finally, VanderWeele (2014): https://journals.lww.com/epidem/Fulltext/2014/09000/A_Unification_of_Mediation_and_Interaction__A.19.aspx#errata shows that, in the presence of a mediator, interactions can be accounted for by separating among 4 effects. I think that paper highlights that interactions, too, in mediation models, are considered in terms of the proportion of the effect explained. The main reason to do so is expressed by the author in “Relation to Mediation Decompositions”: "the 4-way decomposition provides 4 components capturing all the subtleties: the portion of the total effect that is attributable just to mediation, just to interaction, to both mediation and interaction, or to neither mediation nor interaction". There he also describes at least three other reasons to propose his approach. The first one is to discuss not in terms of counterfactual outcomes (like $Y(1,0)$), which he defines as "difficult to interpret", but of possible values of the mediator (like $X=1, Z=0$). The second one is to separate the "pure direct effect" (often called the "natural direct effect" in the literature) into a "controlled direct effect" (in the case of a null mediator) and a "reference interaction effect" (expressing the change in the pure direct effect due to the presence of the mediator). This is particularly relevant in the case of a binary mediator where, even if the mediator is present with no treatment, its effect on the treatment effect may be worth investigating. The author explains the concept of the "portion eliminated" to clarify this point. The third one is that he shows that the third component (the "mediated interaction effect") "is sometimes combined with the pure indirect effect to obtain the total indirect effect and sometimes combined with the pure direct effect to obtain the total direct effect".
He starts with the case where the mediator is binary, defining the “controlled direct effect” as the one in the case the mediator is 0. In his words: "The intuition behind this decomposition is that if the exposure affects the outcome for a particular individual, then at least 1 of 4 things must be the case. One possibility is that the exposure might affect the outcome through pathways that do not require the mediator (ie, the exposure affects the outcome even when the mediator is absent); in other words, the first component is non-zero. A second possibility is that the exposure effect might operate only in the presence of the mediator (ie, there is an interaction), with the exposure itself not necessary for the mediator to be present (ie, the mediator itself would be present in the absence of the exposure, although the mediator is itself necessary for the exposure to have an effect on the outcome); in other words, the second component is non-zero. A third possibility is that the exposure effect might operate only in the presence of the mediator (ie, there is an interaction), with the exposure itself needed for the mediator to be present (ie, the exposure causes the mediator, and the presence of the mediator is itself necessary for the exposure to have an effect on the outcome); in other words, the third component is non-zero. The fourth possibility is that the mediator can cause the outcome in the absence of the exposure, but the exposure is necessary for the mediator itself to be present; in other words, the fourth component is non-zero".
In his notation, the mediator is called $M$, and the second subscript for $Y$ does not stand for potential values of the mediator under the given value of $X$, but directly for mediators value; so, for example, $Y_{1,0}$ means: “$Y$ in case the person is treated and the mediator is absent”.
He calls the term $Y_{11} - Y_{10} - Y_{01} + Y_{00}$ the “additive interaction”, which can be seen as the difference between the global effect of moving both the treatment and the mediator from 0 to 1, and the sum of the two separate effects of moving the treatment to 1 while leaving the mediator at 0, and vice versa. Given that, as said above, he uses $M=0$ as the reference case, the interaction plays a role only in case $M(1)=1$ (otherwise, either both interaction effects are null, or they cancel each other out, a case that the author however doesn’t seem to explore, also because, when talking about the proportion of the total effect due to each component, he says: “reporting such proportion measures, however, generally makes sense only if all the components are in the same direction (eg, all positive or all negative)"). In that case, either also $M(0)=1$ (in such a situation, the interaction is ascribed to the “reference interaction”, because it takes place without needing the intervention), or $M(0)=0$ (in that case, the interaction is ascribed to the “mediated interaction”, because it takes place only thanks to the intervention).
At the beginning of the Appendix, the general case (i.e., not restricted to binary exposure and mediator) is presented. The point is that, in that case, the decomposition into the 4 effects is conditional on the mediator's value (i.e., it depends on it).
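As a sanity check on the binary case (my own toy numbers, not from the paper), the four individual-level components (controlled direct effect $Y_{10}-Y_{00}$, reference interaction $(Y_{11}-Y_{10}-Y_{01}+Y_{00})M(0)$, mediated interaction $(Y_{11}-Y_{10}-Y_{01}+Y_{00})(M(1)-M(0))$, and pure indirect effect $(Y_{01}-Y_{00})(M(1)-M(0))$) can be verified to sum to the total effect $Y_{1,M(1)}-Y_{0,M(0)}$:

```python
def four_way(y00, y01, y10, y11, m0, m1):
    """4-way decomposition for one individual with binary exposure/mediator.
    y_xm = outcome under exposure x and mediator m; m0, m1 = M(0), M(1)."""
    interaction = y11 - y10 - y01 + y00   # the additive interaction term
    cde = y10 - y00                       # controlled direct effect (M = 0)
    int_ref = interaction * m0            # reference interaction
    int_med = interaction * (m1 - m0)     # mediated interaction
    pie = (y01 - y00) * (m1 - m0)         # pure indirect effect
    return cde, int_ref, int_med, pie

# Toy individual: exposure switches the mediator on (M(0)=0, M(1)=1) and
# exposure and mediator interact super-additively.
parts = four_way(y00=0, y01=1, y10=1, y11=3, m0=0, m1=1)
total = (1 + 1 * (3 - 1)) - (0 + 0 * (1 - 0))   # Y_{1,M(1)} - Y_{0,M(0)}
print(parts, sum(parts) == total)
```

The four components add up to the total effect for any values, which is the identity the decomposition rests on.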
|
47,854
|
Normal distribution necessary for linear-mixed effects? (R)
|
As per the comment by @Roland, there is no requirement for the response variable itself to be normally distributed in a linear mixed model (LMM). It is the distribution of the response, conditional on the random effects, that is assumed to be normally distributed. This means that the residuals should be normally distributed. Therefore, you can proceed with fitting an LMM and then check the residuals to see if they are normally distributed. Treating Likert item responses as continuous data is a contentious topic - for example see here:
Parametric tests and Likert Scales (Ordinal data) - Two different views
This simulation study plays down the concerns. Clearly, with fewer levels in the Likert scale there is going to be more of a problem. This presentation from one of the authors of the lme4 package for R seems to suggest that 10 or more levels is OK.
So with a 7 point scale, there is a good chance that the residuals will not be normally distributed, in which case you can look at fitting a generalised linear mixed model for ordinal data - two such packages which fit these models in R are ordinal and MCMCglmm
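As a rough, simulated illustration of that workflow (subject means are used here as a crude stand-in for the random-intercept predictions of a real LMM fit, e.g. from lme4; the data are invented), one can extract residuals from a fitted model and inspect their distribution:

```python
import numpy as np

rng = np.random.default_rng(0)
n_subj, n_obs = 30, 10
subj = np.repeat(np.arange(n_subj), n_obs)
# Continuous latent response with a subject-level random intercept ...
latent = rng.normal(0, 1, n_subj)[subj] + rng.normal(0, 1, subj.size)
# ... collapsed onto a 7-point Likert item
y = np.clip(np.round(latent + 4), 1, 7)

# Crude stand-in for the conditional fit of an LMM: subject means play the
# role of the random-intercept predictions; inspect what is left over.
subj_means = np.array([y[subj == s].mean() for s in range(n_subj)])
resid = y - subj_means[subj]

z = (resid - resid.mean()) / resid.std()
skew, excess_kurt = float(np.mean(z**3)), float(np.mean(z**4) - 3)
print(skew, excess_kurt)   # both near 0 if the residuals look normal
```

In practice you would take the residuals from the actual LMM fit and use a Q-Q plot or a formal normality test rather than raw moments.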
|
47,855
|
Normal distribution necessary for linear-mixed effects? (R)
|
If you use something like a generalized linear mixed model, then the response variable doesn't have to be Gaussian. This fact is the key differentiator between a GLMM and an LMM.
|
47,856
|
Difference between Log Entropy Model and TF-IDF Model?
|
Your question brought me to a thread on the Gensim user group where that question was asked. That in turn links to a paper titled An Empirical Evaluation of Models of Text Document Similarity containing a partial answer to your question:
The first global weighting function we considered normalized each word using the local weighting function, the second was an inverse document frequency measure, and the third global was an entropy measure. More details are provided by Pincombe (2004).
And
The results of these analyses are shown in Figure 4. It is clear that altering the local weighting function makes relatively little difference but that changing the global weighting function does make a difference. Entropy global weighting is generally superior to normalized weighting, and both are better than the inverse document frequency function. For the 50 document corpus, performance is best when there is no dimensionality reduction in the representation (i.e., when all 50 factors are used thus reducing LSA to a weighted vector space model). Peak performance for the extended 364 document corpus is better and is achieved when between 100 and 200 factors are used.
Figure 4: Correlations between the human similarity measures and nine LSA similarity models, for each of four situations corresponding to (a) the 50 document corpus; (b) the 50 document corpus without stopwords; (c) the 364 document corpus; (d) the 364 document corpus without stopwords. The nine similarity models consider every pairing of the binary (‘bin’), logarithmic (‘log’) and term frequency (‘tf’) local weighting functions with the entropy (‘ent’), normalized (‘nml’) and inverse document frequency (‘idf’) global weighting functions. The dashed lines show the inter-rater correlation.
So this in turn references Pincombe (2004), A Comparison of Human and Latent Semantic Analysis (LSA) Judgements of Pairwise Document Similarities for a News Corpus. Checking there, this paper contains far more detail on the topic (I will omit more figures, as they are mostly similar), but comes to a very similar conclusion:
Overall, the two best correlations with human judgements of pairwise document similarity are achieved using log-entropy weighting on stopped and backgrounded text. This is consistent with the literature where log-entropy weighting has performed best in information recall (Dumais, 1991) and text categorisation (Nakov et al., 2001). More controversial are the relative performances of the normal and idf global weighting schemes. The results showed that the use of idf as the global-weight produced correlations with human pairwise judgements that were uniformly worse than those achieved using entropy or normal global-weights in similar situations. In an information recall study (Dumais, 1991) idf weighting outperformed normal weighting. The same is true for most local weighting schemes in a text identification study (Nakov et al., 2001) although this ordering of global weighting function performance did occur for term-frequency local weighting.
And
The choice of the global weighting function affects the correlations more than any other characteristic. The use of idf global weighting produces correlations with human pairwise judgments that are uniformly worse than those achieved using entropy or normal global-weights in similar situations. Variations in global weights have much more effect on the level of correlation with human pairwise judgments than do variations in local weights.
So it appears log-entropy works better for information retrieval tasks, while you might want to rely on TF-IDF for the more semantics-heavy information extraction/classification tasks where you will be using far more features. That being said, the TF-IDF measure has many knobs to tune (sublinear TF and DF or not? - see Nakov et al., 2001, Weight functions impact on LSA performance) and your results with TF-IDF will vary greatly with respect to the exact implementation.
Overall, I'd say it makes intrinsic sense that log(TF)-Entropy should perform best, given that (the probability-based) entropy captures more "information" about the term across your documents than (the "binary") DF does.
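For concreteness, here is a toy numpy sketch of the two global weights (using one common formulation of each; implementations vary, as noted above). It highlights the "information" point: a term occurring in every document gets an idf of zero regardless of how it is spread, while the entropy weight still distinguishes even from uneven spread:

```python
import numpy as np

# Toy term-document count matrix: rows = terms, columns = documents.
counts = np.array([
    [2, 2, 2, 2],   # in every document, evenly spread
    [1, 1, 1, 5],   # in every document, but unevenly
    [0, 4, 0, 0],   # concentrated in a single document
], dtype=float)
n_docs = counts.shape[1]

# TF-IDF (one common variant): raw tf times log(N / df)
df = (counts > 0).sum(axis=1)
idf = np.log(n_docs / df)
tfidf = counts * idf[:, None]

# Log-entropy: local weight log(1 + tf); global weight
# 1 + sum_j p_ij log(p_ij) / log(N), with p_ij = tf_ij / global frequency
gf = counts.sum(axis=1, keepdims=True)
p = counts / gf
plogp = np.where(p > 0, p * np.log(np.where(p > 0, p, 1.0)), 0.0)  # 0*log 0 := 0
entropy_weight = 1.0 + plogp.sum(axis=1) / np.log(n_docs)
log_entropy = np.log1p(counts) * entropy_weight[:, None]

# idf cannot tell the first two terms apart (both get idf = 0); entropy can.
print(idf.round(3), entropy_weight.round(3))
```

The evenly spread term gets an entropy weight of (numerically) zero, the uneven-but-ubiquitous term a positive weight, and the fully concentrated term a weight of one.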
|
47,857
|
Parameter n_iter in scikit-learn's SGDClassifier
|
It must be the second.
I always answer these questions by looking at the source code (which in sklearn is of very high quality, and is written extremely clearly). The function in question is here (I searched for SGDClassifier then followed the function calls until I got to this one, which is a low level routine).
Breaking out the important piece:
for epoch in range(n_iter):
    ...
    for i in range(n_samples):
        ...
That's exactly the pattern you would expect for n_iter passes over the full training data.
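To see that pattern end to end, here is a minimal self-contained sketch (the same loop structure, not sklearn's actual Cython implementation) in which n_iter counts full passes over the training data:

```python
import numpy as np

def sgd_linear(X, y, n_iter=5, lr=0.01):
    """Plain SGD for least squares: n_iter full passes (epochs) over the data."""
    w = np.zeros(X.shape[1])
    for epoch in range(n_iter):      # outer loop: one epoch per iteration ...
        for i in range(len(X)):      # ... inner loop: visit every sample once
            grad = (X[i] @ w - y[i]) * X[i]
            w -= lr * grad
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
true_w = np.array([1.5, -2.0])
y = X @ true_w                       # noise-free, so SGD can recover true_w
w = sgd_linear(X, y, n_iter=20)      # 20 epochs = 20 * 200 individual updates
```

So with 200 samples and n_iter=20, the weight vector receives 4000 individual updates, not 20.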
|
47,858
|
Time series analysis of electricity load questions
|
Electric load typically exhibits intra-daily seasonality, as well as intra-weekly seasonality (weekends have different power demand patterns than weekdays), plus yearly seasonality (high power demands for heating in winter, higher power demands for air conditioning in summer). Plus time-shifting holidays.
I'd say your ACF and Dickey-Fuller are fully consonant with these seasonalities. (It's hard to see it in your ACF plot, but I assume the peaks are at multiples of 24?)
Anyway, these seasonalities are so typical and prevalent for electricity demands that, to be honest, I would not be too interested in diagnostics checking these. I'd be more interested in (P)ACFs and tests for residuals after accounting for such seasonalities.
That is, starting from observations $y_t$, you would create a model that accounts for multiple seasonalities and yields in-sample fits $\tilde{y}_t$. If this model truly captures the full seasonal pattern, then the residuals $y_t-\tilde{y}_t$ should not exhibit any remaining seasonality - which you can then test by applying (P)ACF and statistical tests to these residuals. (More precisely, the model that yields $\tilde{y}_t$ should also capture trends and other sources of nonstationarity - but the multiple seasonalities will usually be the strongest source of explainable variation, which is why I'm concentrating on them.)
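As a toy illustration of that residual check (simulated data: daily and weekly sine cycles plus noise standing in for real demand, and a simple hour-of-week mean profile standing in for the seasonal model):

```python
import numpy as np

rng = np.random.default_rng(1)
hours = np.arange(24 * 60)                     # 60 days of hourly observations
# Simulated load: daily + weekly cycles plus noise, standing in for real data
y = (10 * np.sin(2 * np.pi * hours / 24)
     + 3 * np.sin(2 * np.pi * hours / (24 * 7))
     + rng.normal(0, 1, hours.size))

# "Model" step: the mean profile per hour-of-week captures both seasonalities
how = hours % (24 * 7)
profile = np.array([y[how == h].mean() for h in range(24 * 7)])
resid = y - profile[how]

def acf(x, lag):
    x = x - x.mean()
    return float((x[:-lag] * x[lag:]).sum() / (x * x).sum())

# Lag-24 autocorrelation: large for the raw series, near zero for residuals
print(acf(y, 24), acf(resid, 24))
```

A real application would of course use a proper multiple-seasonality model (e.g. Taylor's double seasonal exponential smoothing) rather than a raw mean profile, but the diagnostic logic is the same.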
As to how to deal with and forecast electric load, this is an active research topic. Googling "electric load forecasting" and similar will yield quite a number of relevant hits, such as Cho et al. (2013, JASA). The most important point is of course to capture the overlapping seasonalities, as paradigmatically done by Taylor (2003, JORS). You could also browse our previous questions on multiple seasonal patterns.
Finally, Weron (2014, IJF) is a recent review for electricity price forecasting, which of course is different from load forecasting, but it may be inspirational.
EDIT: I just read Hong & Fan (2016, International Journal of Forecasting), which is probably the very best review on electric load forecasting far and wide. Very much recommended, indeed. Of course, so is anything by Tao Hong, who I'd say is the top expert in the field.
|
47,859
|
Support of likelihood ratio test statistic
|
This statistic weighs evidence for the two hypotheses by comparing their probability densities at the observed value of $y$. Because the denominator could be zero in this situation, we have to consider two possibilities:
The denominator is positive. This means that $H_0$ assigns a positive chance to any tiny neighborhood of $y$. It occurs when $0 \lt y$. There is no problem with a division by zero. In terms of the indicator function $\mathcal{I}$, a formula for the likelihood ratio is $$\frac{\mathcal{I}_{(0,1)}(y)}{e^{-y}}.$$ This equals $e^y$ for $0 \lt y \lt 1$ and otherwise is zero.
The denominator is zero. This means $H_0$ assigns no probability density to $y$. There are two possibilities:
$H_1$ assigns no probability density to $y$, either. Thus, this $y$ has no chance of being observed under either hypothesis. We needn't consider this any further. The set of $y$ for which this is the case is the intersection of the complements of the supports of the hypotheses: the non-positive real numbers.
$H_1$ assigns some probability density to $y$. Thus, this $y$ is possible under $H_1$ but not under $H_0$. The conclusion is obvious. As a convention we may use values in an extended Real number line $\{-\infty, \infty\}\cup \mathbb{R}$ to designate such likelihood ratios (or their logarithms); here we would say that the likelihood ratio (and its log) is $\infty$.
To summarize, let $S_i\subset\mathbb{R}$ be the supports of the hypotheses. Then the likelihood ratio must be considered a function whose domain is the union of supports $S_0\cup S_1$ which takes values in the extended positive reals $[0,\infty)\cup\{\infty\}$. The log likelihood ratio takes values in the extended reals $\mathbb{R}\cup\{-\infty,\infty\}$, with $\log(0)$ defined to be $-\infty$.
When $S_0=S_1$, the zero-denominator case has no chance of happening, regardless of the hypothesis, and we may dispense with using the extended reals if we wish. This is a frequent assumption in likelihood ratio settings.
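To make the case analysis concrete, here is a small Python sketch of this extended-valued likelihood ratio for the pair of hypotheses above (the function name is illustrative):

```python
import math

def likelihood_ratio(y):
    """LR of H1: Y ~ Uniform(0,1) versus H0: Y ~ Exponential(1),
    valued in the extended positive reals (None outside both supports)."""
    f1 = 1.0 if 0 < y < 1 else 0.0        # H1 density at y
    f0 = math.exp(-y) if y > 0 else 0.0   # H0 density at y
    if f0 > 0:
        return f1 / f0                    # equals e^y on (0,1), zero for y >= 1
    if f1 > 0:
        return math.inf                   # possible under H1 but not H0
    return None                           # y outside the union of supports

print(likelihood_ratio(0.5))   # e^{0.5} ≈ 1.6487
print(likelihood_ratio(4.3))   # 0.0
```

For this particular pair the infinite branch is unreachable, since the support of $H_1$ is contained in that of $H_0$; it is included only to match the general convention described above.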
|
Support of likelihood ratio test statistic
|
This statistic weighs evidence for the two hypotheses by comparing their probability densities at the observed value of $y$. Because the denominator could be zero in this situation, we have to consid
|
Support of likelihood ratio test statistic
This statistic weighs evidence for the two hypotheses by comparing their probability densities at the observed value of $y$. Because the denominator could be zero in this situation, we have to consider two possibilities:
The denominator is positive. This means that $H_0$ assigns a positive chance to any tiny neighborhood of $y$. It occurs when $0 \lt y$. There is no problem with a division by zero. In terms of the indicator function $\mathcal{I}$, a formula for the likelihood ratio is $$\frac{\mathcal{I}_{(0,1)}(y)}{e^{-y}}.$$ This equals $e^y$ for $0 \lt y \lt 1$ and otherwise is zero.
The denominator is zero. This means $H_0$ assigns no probability density to $y$. There are two possibilities:
$H_1$ assigns no probability density to $y$, either. Thus, this $y$ has no chance of being observed under either hypothesis. We needn't consider this any further. The set of $y$ for which this is the case is the intersection of the complements of the supports of the hypotheses: the non-positive real numbers.
$H_1$ assigns some probability density to $y$. Thus, this $y$ is possible under $H_1$ but not under $H_0$. The conclusion is obvious. As a convention we may use values in an extended Real number line $\{-\infty, \infty\}\cup \mathbb{R}$ to designate such likelihood ratios (or their logarithms); here we would say that the likelihood ratio (and its log) is $\infty$.
To summarize, let $S_i\subset\mathbb{R}$ be the supports of the hypotheses. Then the likelihood ratio must be considered a function whose domain is the union of supports $S_0\cup S_1$ which takes values in the extended positive reals $[0,\infty)\cup\{\infty\}$. The log likelihood ratio takes values in the extended reals $\mathbb{R}\cup\{-\infty,\infty\}$, with $\log(0)$ defined to be $-\infty$.
When $S_0=S_1$, the zero-denominator case has no chance of happening, regardless of the hypothesis, and we may dispense with using the extended reals if we wish. This is a frequent assumption in likelihood ratio settings.
|
Support of likelihood ratio test statistic
This statistic weighs evidence for the two hypotheses by comparing their probability densities at the observed value of $y$. Because the denominator could be zero in this situation, we have to consid
|
47,860
|
Support of likelihood ratio test statistic
|
Actually both densities are defined over the whole line, they're just 0 elsewhere than the places you mention.
You have to think carefully about the density across at least the +ve half-line -- your answer defines what you get when $0<y<1$, but what's the LR when y=4.3?
That could happen if the distribution really were exponential, so you have to consider it.
|
Support of likelihood ratio test statistic
|
Actually both densities are defined over the whole line, they're just 0 elsewhere than the places you mention.
You have to think carefully about the density across at least the +ve half-line -- your a
|
Support of likelihood ratio test statistic
Actually both densities are defined over the whole line, they're just 0 elsewhere than the places you mention.
You have to think carefully about the density across at least the +ve half-line -- your answer defines what you get when $0<y<1$, but what's the LR when y=4.3?
That could happen if the distribution really were exponential, so you have to consider it.
|
Support of likelihood ratio test statistic
Actually both densities are defined over the whole line, they're just 0 elsewhere than the places you mention.
You have to think carefully about the density across at least the +ve half-line -- your a
|
47,861
|
Interpret regression coefficients when independent variable is a ratio
|
Ordinarily, we interpret coefficients in terms of how the expected value of the response should change when we effect tiny changes in the underlying variables. This is done by differentiating the formula, which is
$$E\left[\log Y\right] = \beta_0 + \beta_1 x_1 + \beta_2\left(\frac{x_3}{x_1}\right).$$
The derivatives are
$$\frac{\partial}{\partial x_1} E\left[\log Y \right] = \beta_1 - \beta_2\left( \frac{x_3}{x_1^2}\right)$$
and
$$\frac{\partial}{\partial x_3} E\left[\log Y \right] = \beta_2 \left(\frac{1}{x_1}\right).$$
Because the results depend on the values of the variables, there is no universal interpretation of the coefficients: their effects depend on the values of the variables.
Often we will examine these rates of change when the variables are set to average values (and, when the model is estimated from data, we use the parameter estimates as surrogates for the parameters themselves). For instance, suppose the mean value of $x_1$ in the dataset is $2$ and the mean value of $x_3$ is $4.$ Then a small change of size $\mathrm{d}x_1$ in $x_1$ is associated with a change of size
$$\left(\frac{\partial}{\partial x_1} E\left[\log Y \right] \right)\mathrm{d}x_1 = (\beta_1 - \beta_2(4/2^2))\mathrm{d}x_1 = (\beta_1 - \beta_2)\mathrm{d}x_1.$$
Similarly, changing $x_3$ to $x_3+\mathrm{d}x_3$ is associated with change of size
$$\left(\frac{\partial}{\partial x_3} E\left[\log Y \right] \right)\mathrm{d}x_3 = \left(\frac{\beta_{2}}{2}\right)\mathrm{d}x_3$$
in $E\left[\log y\right].$
For more examples of these kinds of calculations and interpretations, and to see how the calculations can (often) be performed without knowing any Calculus, visit How to interpret coefficients of angular terms in a regression model?, How do I interpret the coefficients of a log-linear regression with quadratic terms?, Linear and quadratic term interpretation in regression analysis, and How to interpret log-log regression coefficients for other than 1 or 10 percent change?.
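The worked numbers above can be checked with a tiny finite-difference sketch in Python (the coefficient values are illustrative, not estimated from any data):

```python
# Illustrative coefficients (an assumption, not estimates).
beta0, beta1, beta2 = 1.0, 0.3, -0.2
f = lambda x1, x3: beta0 + beta1 * x1 + beta2 * (x3 / x1)  # E[log Y]

h = 1e-6
d_x1 = (f(2 + h, 4) - f(2 - h, 4)) / (2 * h)   # numeric d/dx1 at (x1, x3) = (2, 4)
d_x3 = (f(2, 4 + h) - f(2, 4 - h)) / (2 * h)   # numeric d/dx3 at (x1, x3) = (2, 4)

print(d_x1, beta1 - beta2)   # both ≈ 0.5
print(d_x3, beta2 / 2)       # both ≈ -0.1
```

The central differences reproduce $\beta_1 - \beta_2$ and $\beta_2/2$, as the derivatives at the example means require.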
|
Interpret regression coefficients when independent variable is a ratio
|
Ordinarily, we interpret coefficients in terms of how the expected value of the response should change when we effect tiny changes in the underlying variables. This is done by differentiating the for
|
Interpret regression coefficients when independent variable is a ratio
Ordinarily, we interpret coefficients in terms of how the expected value of the response should change when we effect tiny changes in the underlying variables. This is done by differentiating the formula, which is
$$E\left[\log Y\right] = \beta_0 + \beta_1 x_1 + \beta_2\left(\frac{x_3}{x_1}\right).$$
The derivatives are
$$\frac{\partial}{\partial x_1} E\left[\log Y \right] = \beta_1 - \beta_2\left( \frac{x_3}{x_1^2}\right)$$
and
$$\frac{\partial}{\partial x_3} E\left[\log Y \right] = \beta_2 \left(\frac{1}{x_1}\right).$$
Because the results depend on the values of the variables, there is no universal interpretation of the coefficients: their effects depend on the values of the variables.
Often we will examine these rates of change when the variables are set to average values (and, when the model is estimated from data, we use the parameter estimates as surrogates for the parameters themselves). For instance, suppose the mean value of $x_1$ in the dataset is $2$ and the mean value of $x_3$ is $4.$ Then a small change of size $\mathrm{d}x_1$ in $x_1$ is associated with a change of size
$$\left(\frac{\partial}{\partial x_1} E\left[\log Y \right] \right)\mathrm{d}x_1 = (\beta_1 - \beta_2(4/2^2))\mathrm{d}x_1 = (\beta_1 - \beta_2)\mathrm{d}x_1.$$
Similarly, changing $x_3$ to $x_3+\mathrm{d}x_3$ is associated with change of size
$$\left(\frac{\partial}{\partial x_3} E\left[\log Y \right] \right)\mathrm{d}x_3 = \left(\frac{\beta_{2}}{2}\right)\mathrm{d}x_3$$
in $E\left[\log y\right].$
For more examples of these kinds of calculations and interpretations, and to see how the calculations can (often) be performed without knowing any Calculus, visit How to interpret coefficients of angular terms in a regression model?, How do I interpret the coefficients of a log-linear regression with quadratic terms?, Linear and quadratic term interpretation in regression analysis, and How to interpret log-log regression coefficients for other than 1 or 10 percent change?.
|
Interpret regression coefficients when independent variable is a ratio
Ordinarily, we interpret coefficients in terms of how the expected value of the response should change when we effect tiny changes in the underlying variables. This is done by differentiating the for
|
47,862
|
Interpret regression coefficients when independent variable is a ratio
|
For a more useful answer you should tell us more about your real application. As the question only seems to be about the role of a ratio variable $x \in [0,1]$, so I simplify the question by removing the other parts of the model. It then becomes:
$$
\log Y = \beta_0 + \beta x + E
$$
which in multiplicative form becomes
$$
Y = C e^{\beta x} E
$$
where $C=e^{\beta_0}, E = e^\epsilon$ . The derivative of $Y$ with respect to $x$ is then $\frac{\partial Y}{\partial x}= \beta Y$ so that
$\beta = \frac{\partial Y}{\partial x} / Y$. The fact that $x$ is a ratio plays no part; the interpretation is the same. You seem to be preoccupied with the fact that increasing a ratio by 1 doesn't make sense: then increase it by a smaller amount, say 0.01, and the relative increase in $Y$ (that is, the increase as a proportion of $Y$) is $0.01 \beta$.
There might be other issues with interpretation unrelated to this problem: if your proportion is based on few cases, that is, of the form $z/n$ where $z$ is a count and $n$ is small, it will be measured with error, which would need some elaboration to take into account.
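A quick numerical check of the small-change interpretation (the values of $\beta$ and $C=e^{\beta_0}$ here are illustrative):

```python
import math

# Illustrative values: beta is the coefficient on the ratio x, C = exp(beta0).
beta, C = 0.8, 2.0
Y = lambda x: C * math.exp(beta * x)

x = 0.40
rel_increase = Y(x + 0.01) / Y(x) - 1   # exact relative change for dx = 0.01
print(rel_increase, 0.01 * beta)        # ≈ 0.00803 vs 0.008
```

The exact relative change $e^{0.01\beta}-1$ is very close to the linear approximation $0.01\beta$, as claimed.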
|
Interpret regression coefficients when independent variable is a ratio
|
For a more useful answer you should tell us more about your real application. As the question only seems to be about the role of a ratio variable $x \in [0,1]$, so I simplify the question by removing
|
Interpret regression coefficients when independent variable is a ratio
For a more useful answer you should tell us more about your real application. As the question only seems to be about the role of a ratio variable $x \in [0,1]$, so I simplify the question by removing the other parts of the model. It then becomes:
$$
\log Y = \beta_0 + \beta x + E
$$
which in multiplicative form becomes
$$
Y = C e^{\beta x} E
$$
where $C=e^{\beta_0}, E = e^\epsilon$ . The derivative of $Y$ with respect to $x$ is then $\frac{\partial Y}{\partial x}= \beta Y$ so that
$\beta = \frac{\partial Y}{\partial x} / Y$. The fact that $x$ is a ratio plays no part; the interpretation is the same. You seem to be preoccupied with the fact that increasing a ratio by 1 doesn't make sense: then increase it by a smaller amount, say 0.01, and the relative increase in $Y$ (that is, the increase as a proportion of $Y$) is $0.01 \beta$.
There might be other issues with interpretation unrelated to this problem: if your proportion is based on few cases, that is, of the form $z/n$ where $z$ is a count and $n$ is small, it will be measured with error, which would need some elaboration to take into account.
|
Interpret regression coefficients when independent variable is a ratio
For a more useful answer you should tell us more about your real application. As the question only seems to be about the role of a ratio variable $x \in [0,1]$, so I simplify the question by removing
|
47,863
|
Interpret regression coefficients when independent variable is a ratio
|
I suppose you could interpret the numerator and denominator with ratio.
If your fraction increases by 1 unit, it means your numerator (x3) increased; if your fraction decreases by 1 unit, it means your denominator (x1) decreased, and that would be its effect on the dependent variable.
|
Interpret regression coefficients when independent variable is a ratio
|
I suppose you could interpret the numerator and denominator with ratio.
If your fraction increases by 1 unit, it means your numerator (x3) increased; if your fraction decreases by 1 unit, it means your
|
Interpret regression coefficients when independent variable is a ratio
I suppose you could interpret the numerator and denominator with ratio.
If your fraction increases by 1 unit, it means your numerator (x3) increased; if your fraction decreases by 1 unit, it means your denominator (x1) decreased, and that would be its effect on the dependent variable.
|
Interpret regression coefficients when independent variable is a ratio
I suppose you could interpret the numerator and denominator with ratio.
If your fraction increases by 1 unit, it means your numerator (x3) increased; if your fraction decreases by 1 unit, it means your
|
47,864
|
Interpret regression coefficients when independent variable is a ratio
|
Just as in linear regression it is common to view nonlinear factors such as $x_1^2$ or $x_1 \cdot x_2$ as individual covariates, similarly there is no reason why $x_3/x_1$ can't be a legitimate covariate.
As long as your response variable is indeed linear in that ratio, then that is simply how your system behaves. Suppose you had a model that was linear in body mass index (BMI). That is a ratio commonly used in medicine (although it is highly suspect). Or, the HDL ratio for cholesterol. Or pressure in physics (F/A). If that's the way the system behaves, then that is just how it is.
It is up to you as the modeler to know why that ratio is important in your model, not up to the regression table output to tell you (as in "how to interpret" it).
|
Interpret regression coefficients when independent variable is a ratio
|
Just as in linear regression it is common to view nonlinear factors such as $x_1^2$ or $x_1 \cdot x_2$ as individual covariates, similarly there is no reason why $x_3/x_1$ can't be a legitimate covari
|
Interpret regression coefficients when independent variable is a ratio
Just as in linear regression it is common to view nonlinear factors such as $x_1^2$ or $x_1 \cdot x_2$ as individual covariates, similarly there is no reason why $x_3/x_1$ can't be a legitimate covariate.
As long as your response variable is indeed linear in that ratio, then that is simply how your system behaves. Suppose you had a model that was linear in body mass index (BMI). That is a ratio commonly used in medicine (although it is highly suspect). Or, the HDL ratio for cholesterol. Or pressure in physics (F/A). If that's the way the system behaves, then that is just how it is.
It is up to you as the modeler to know why that ratio is important in your model, not up to the regression table output to tell you (as in "how to interpret" it).
|
Interpret regression coefficients when independent variable is a ratio
Just as in linear regression it is common to view nonlinear factors such as $x_1^2$ or $x_1 \cdot x_2$ as individual covariates, similarly there is no reason why $x_3/x_1$ can't be a legitimate covari
|
47,865
|
Overestimation of the noise precision in Bayesian linear regression when $n\gtrsim p$
|
This problem turns out to be well-known in the frequentist literature. In particular, if we use an improper prior $\Lambda_0=b_0=0$, the posterior scale hyperparameter for the distribution on $\tau$ is
$$\begin{align}
b_n&=\frac{1}{2}\left(y^Ty - \mu_n^T\Lambda_n\mu_n\right)\\
&=\frac 12\left(y^Ty -\mu_n^TX^Ty-y^TX\mu_n+ \mu_n^T\Lambda_n\mu_n\right)\\
&=\frac 12\left(y-X\mu_n\right)^T\left(y-X\mu_n\right),
\end{align}$$
where we have used the fact that $\Lambda_n=X^TX$ and $\mu_n=\left(X^TX\right)^{-1}X^Ty$. Thus, $b_n$ is $n/2$ times the sample variance of the residuals $y-X\mu_n$. But we already used the data to estimate the regression coefficients and the sample variance of the residuals is a biased estimator of the population variance. In particular, we have $\nu=n-p$ degrees of freedom and an unbiased estimate of the population variance is
$$
\frac{1}{n-p}\left(y-X\mu_n\right)^T\left(y-X\mu_n\right).
$$
Whenever $n\gg p$ just using the sample variance is fine because $\frac{n}{n-p}\approx 1$. However, as soon as $p$ becomes comparable with $n$ the population variance is underestimated by the sample variance. The inference fails. The problem is discussed in the context of maximum likelihood inference on page 388 in "Data Analysis Using Regression and Multilevel/Hierarchical Models".
I like conjugate priors. But in this case they caused me some trouble. Moral of the story: don't just pick the functional form of your priors because they are conjugate.
As an aside: using a variational mean-field approximation such that the posterior factorises with respect to the regression coefficients and the noise precision works much better than the closed form solution provided by the conjugate priors (better being defined as the posterior being consistent with the true value that was used to generate the data).
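The degrees-of-freedom effect can be seen in a pure-Python Monte Carlo sketch with a simple group-means design (the design and constants are illustrative; fitting one mean per group leaves $\nu = n - p$ residual degrees of freedom):

```python
import random
random.seed(0)

# n = 30 observations in p = 10 groups of 3; true sigma^2 = 1, nu = n - p = 20.
n_groups, group_size = 10, 3
n, p = n_groups * group_size, n_groups

biased, unbiased = [], []
for _ in range(2000):
    rss = 0.0
    for _ in range(n_groups):
        y = [random.gauss(0.0, 1.0) for _ in range(group_size)]
        m = sum(y) / group_size                 # fitted group mean
        rss += sum((v - m) ** 2 for v in y)
    biased.append(rss / n)                      # E = sigma^2 (n - p)/n = 2/3
    unbiased.append(rss / (n - p))              # E = sigma^2 = 1

mean_biased = sum(biased) / len(biased)
mean_unbiased = sum(unbiased) / len(unbiased)
print(mean_biased, mean_unbiased)   # ≈ 0.667 and ≈ 1.0
```

With $p$ comparable to $n$, dividing the residual sum of squares by $n$ clearly underestimates the population variance, while dividing by $n-p$ does not.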
|
Overestimation of the noise precision in Bayesian linear regression when $n\gtrsim p$
|
This problem turns out to be well-known in the frequentist literature. In particular, if we use an improper prior $\Lambda_0=b_0=0$, the posterior scale hyperparameter for the distribution on $\tau$ is
|
Overestimation of the noise precision in Bayesian linear regression when $n\gtrsim p$
This problem turns out to be well-known in the frequentist literature. In particular, if we use an improper prior $\Lambda_0=b_0=0$, the posterior scale hyperparameter for the distribution on $\tau$ is
$$\begin{align}
b_n&=\frac{1}{2}\left(y^Ty - \mu_n^T\Lambda_n\mu_n\right)\\
&=\frac 12\left(y^Ty -\mu_n^TX^Ty-y^TX\mu_n+ \mu_n^T\Lambda_n\mu_n\right)\\
&=\frac 12\left(y-X\mu_n\right)^T\left(y-X\mu_n\right),
\end{align}$$
where we have used the fact that $\Lambda_n=X^TX$ and $\mu_n=\left(X^TX\right)^{-1}X^Ty$. Thus, $b_n$ is $n/2$ times the sample variance of the residuals $y-X\mu_n$. But we already used the data to estimate the regression coefficients and the sample variance of the residuals is a biased estimator of the population variance. In particular, we have $\nu=n-p$ degrees of freedom and an unbiased estimate of the population variance is
$$
\frac{1}{n-p}\left(y-X\mu_n\right)^T\left(y-X\mu_n\right).
$$
Whenever $n\gg p$ just using the sample variance is fine because $\frac{n}{n-p}\approx 1$. However, as soon as $p$ becomes comparable with $n$ the population variance is underestimated by the sample variance. The inference fails. The problem is discussed in the context of maximum likelihood inference on page 388 in "Data Analysis Using Regression and Multilevel/Hierarchical Models".
I like conjugate priors. But in this case they caused me some trouble. Moral of the story: don't just pick the functional form of your priors because they are conjugate.
As an aside: using a variational mean-field approximation such that the posterior factorises with respect to the regression coefficients and the noise precision works much better than the closed form solution provided by the conjugate priors (better being defined as the posterior being consistent with the true value that was used to generate the data).
|
Overestimation of the noise precision in Bayesian linear regression when $n\gtrsim p$
This problem turns out to be well-known in the frequentist literature. In particular, if we use an improper prior $\Lambda_0=b_0=0$, the posterior scale hyperparameter for the distribution on $\tau$ is
|
47,866
|
Order Statistics, Expected Value of range, $E(X_{(n)}-X_{(1)})$
|
You have the joint distribution $(X_{(n)}, X_{(1)})$ and you need to find the distribution of $X_{(n)} - X_{(1)}$. From the link you provided
$$ f_{1,n}(x,y) = n(n- 1) \dfrac{(y-x)^{n-2}}{\theta^{n-2}} \dfrac{1}{\theta^2}.$$
Let $y-x$ = $u$
$$ f_{1,n}(x,x+u) = n(n- 1) \dfrac{u^{n-2}}{\theta^{n-2}} \dfrac{1}{\theta^2}.$$
Now, I integrate out $x$
$$f_U(u) = \int_0^{\theta - u} n(n- 1) \dfrac{u^{n-2}}{\theta^{n-2}} \dfrac{1}{\theta^2} dx = n(n-1) \dfrac{u^{n-2}}{\theta^{n}} (\theta - u) $$
Now,
\begin{align*}
E(U) &= n(n-1) \int u \dfrac{u^{n-2}}{\theta^{n}}(\theta - u) du\\
& = \dfrac{n(n-1)}{\theta^{n}} \int u^{n-1}(\theta - u) du\\
& = \dfrac{n(n-1)}{\theta^{n}} \dfrac{\theta^{n+1}}{n(n+1)}\\
& = \theta\dfrac{n-1}{n+1}
\end{align*}
I think your mistake was in finding the density for the range.
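The result $E(U) = \theta\,\frac{n-1}{n+1}$ is easy to confirm by simulation (a Python sketch with illustrative $\theta$ and $n$):

```python
import random
random.seed(1)

theta, n, reps = 2.0, 5, 20_000
total = 0.0
for _ in range(reps):
    xs = [random.uniform(0, theta) for _ in range(n)]
    total += max(xs) - min(xs)          # sample range X_(n) - X_(1)

mc = total / reps
print(mc, theta * (n - 1) / (n + 1))    # both ≈ 1.333
```

The Monte Carlo mean of the range matches $\theta(n-1)/(n+1) = 4/3$ here.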
|
Order Statistics, Expected Value of range, $E(X_{(n)}-X_{(1)})$
|
You have the joint distribution $(X_{(n)}, X_{(1)})$ and you need to find the distribution of $X_{(n)} - X_{(1)}$. From the link you provided
$$ f_{1,n}(x,y) = n(n- 1) \dfrac{(y-x)^{n-2}}{\theta^{n-2
|
Order Statistics, Expected Value of range, $E(X_{(n)}-X_{(1)})$
You have the joint distribution $(X_{(n)}, X_{(1)})$ and you need to find the distribution of $X_{(n)} - X_{(1)}$. From the link you provided
$$ f_{1,n}(x,y) = n(n- 1) \dfrac{(y-x)^{n-2}}{\theta^{n-2}} \dfrac{1}{\theta^2}.$$
Let $y-x$ = $u$
$$ f_{1,n}(x,x+u) = n(n- 1) \dfrac{u^{n-2}}{\theta^{n-2}} \dfrac{1}{\theta^2}.$$
Now, I integrate out $x$
$$f_U(u) = \int_0^{\theta - u} n(n- 1) \dfrac{u^{n-2}}{\theta^{n-2}} \dfrac{1}{\theta^2} dx = n(n-1) \dfrac{u^{n-2}}{\theta^{n}} (\theta - u) $$
Now,
\begin{align*}
E(U) &= n(n-1) \int u \dfrac{u^{n-2}}{\theta^{n}}(\theta - u) du\\
& = \dfrac{n(n-1)}{\theta^{n}} \int u^{n-1}(\theta - u) du\\
& = \dfrac{n(n-1)}{\theta^{n}} \dfrac{\theta^{n+1}}{n(n+1)}\\
& = \theta\dfrac{n-1}{n+1}
\end{align*}
I think your mistake was in finding the density for the range.
|
Order Statistics, Expected Value of range, $E(X_{(n)}-X_{(1)})$
You have the joint distribution $(X_{(n)}, X_{(1)})$ and you need to find the distribution of $X_{(n)} - X_{(1)}$. From the link you provided
$$ f_{1,n}(x,y) = n(n- 1) \dfrac{(y-x)^{n-2}}{\theta^{n-2
|
47,867
|
Generating random samples from Huber density
|
As you suggest, this distribution is a mixture of a truncated Normal distribution and of a truncated Laplace distribution: namely,
$$f(x)\propto \exp\left\{-x^2/2\right\}\mathbb{I}_{(-k,k)}(x)+\exp\left\{-k|x|+k^2/2\right\}\mathbb{I}_{(-k,k)^c}(x)$$ implies that the distribution is the mixture of the Normal distribution truncated to $(-k,k)$ and of the Laplace distribution with rate $k$ truncated to $(-k,k)^c$. The only missing item is the weight $\alpha$ of the truncated Normal, which amounts to normalising both terms:
$$\exp\left\{-x^2/2\right\}=\frac{\sqrt{2\pi}}{\sqrt{2\pi}}\times
\dfrac{\Phi(k)-\Phi(-k)}{\Phi(k)-\Phi(-k)}\times\exp\left\{-x^2/2\right\}$$ hence the coefficient of the truncated Gaussian is
$$\sqrt{2\pi}\times\{\Phi(k)-\Phi(-k)\}$$while
$$\exp\left\{-k|x|+k^2/2\right\}=e^{k^2/2}\times\dfrac{2k^{-1}e^{-k^2}}{2k^{-1}e^{-k^2}}\times\exp\left\{-k|x|\right\}$$
since
$$\int_k^\infty \exp\left\{-k|x|\right\}\text{d}x=k^{-1}e^{-k^2}$$
Therefore this distribution is equal to
$$\alpha\mathcal{N}_{(-k,k)}(0,1)+(1-\alpha)\mathcal{L}_{(-k,k)^c}(0,k)$$with
$$\alpha=\dfrac{\sqrt{2\pi}\{\Phi(k)-\Phi(-k)\}}{\sqrt{2\pi}\{\Phi(k)-\Phi(-k)\}+2k^{-1}e^{-k^2/2}}$$
Simulating one of the truncated distributions is straightforward by cdf inversion. Here is an illustration of the fit for k=1:
based on the R code
genhuber <- function(k=1,n=1){
pk=pnorm(k);pmk=pnorm(-k);dk=pk-pmk
alp=1/(1+2*dnorm(k)/(k*dk))
u=runif(n)
return((u<alp)*qnorm(pmk+runif(n)*dk)+(u>alp)*(1-2*(runif(n)<.5))*(k+rexp(n)/k))
}
dhuber <-function(x,k=1){
x=abs(x)
meps=1/(1+2*dnorm(k)/k-2*pnorm(-k))
return(meps*((x<k)*dnorm(x)+(x>=k)*exp(-k*(x-.5*k))/sqrt(2*pi)))}
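As a quick sanity check (here in Python, since only standard-library math is needed), the closed-form weight $\alpha$ above agrees with the alp computed inside genhuber():

```python
import math

def Phi(z):
    # standard normal CDF via erf
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

k = 1.0
dk = Phi(k) - Phi(-k)

# closed-form weight from the derivation above
alpha = (math.sqrt(2 * math.pi) * dk) / (
    math.sqrt(2 * math.pi) * dk + (2 / k) * math.exp(-k * k / 2))

# the equivalent expression used in the R code: 1 / (1 + 2*dnorm(k)/(k*dk))
dnorm_k = math.exp(-k * k / 2) / math.sqrt(2 * math.pi)
alp = 1.0 / (1.0 + 2.0 * dnorm_k / (k * dk))

print(alpha, alp)   # identical up to rounding, ≈ 0.585 for k = 1
```

Dividing numerator and denominator of $\alpha$ by $\sqrt{2\pi}\{\Phi(k)-\Phi(-k)\}$ shows the two expressions are algebraically the same.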
|
Generating random samples from Huber density
|
As you suggest, this distribution is a mixture of a truncated Normal distribution and of a truncated Laplace distribution: namely,
$$f(x)\propto \exp\left\{-x^2/2\right\}\mathbb{I}_{(-k,k)}(x)+\exp\le
|
Generating random samples from Huber density
As you suggest, this distribution is a mixture of a truncated Normal distribution and of a truncated Laplace distribution: namely,
$$f(x)\propto \exp\left\{-x^2/2\right\}\mathbb{I}_{(-k,k)}(x)+\exp\left\{-k|x|+k^2/2\right\}\mathbb{I}_{(-k,k)^c}(x)$$ implies that the distribution is the mixture of the Normal distribution truncated to $(-k,k)$ and of the Laplace distribution with rate $k$ truncated to $(-k,k)^c$. The only missing item is the weight $\alpha$ of the truncated Normal, which amounts to normalising both terms:
$$\exp\left\{-x^2/2\right\}=\frac{\sqrt{2\pi}}{\sqrt{2\pi}}\times
\dfrac{\Phi(k)-\Phi(-k)}{\Phi(k)-\Phi(-k)}\times\exp\left\{-x^2/2\right\}$$ hence the coefficient of the truncated Gaussian is
$$\sqrt{2\pi}\times\{\Phi(k)-\Phi(-k)\}$$while
$$\exp\left\{-k|x|+k^2/2\right\}=e^{k^2/2}\times\dfrac{2k^{-1}e^{-k^2}}{2k^{-1}e^{-k^2}}\times\exp\left\{-k|x|\right\}$$
since
$$\int_k^\infty \exp\left\{-k|x|\right\}\text{d}x=k^{-1}e^{-k^2}$$
Therefore this distribution is equal to
$$\alpha\mathcal{N}_{(-k,k)}(0,1)+(1-\alpha)\mathcal{L}_{(-k,k)^c}(0,k)$$with
$$\alpha=\dfrac{\sqrt{2\pi}\{\Phi(k)-\Phi(-k)\}}{\sqrt{2\pi}\{\Phi(k)-\Phi(-k)\}+2k^{-1}e^{-k^2/2}}$$
Simulating one of the truncated distributions is straightforward by cdf inversion. Here is an illustration of the fit for k=1:
based on the R code
genhuber <- function(k=1,n=1){
pk=pnorm(k);pmk=pnorm(-k);dk=pk-pmk
alp=1/(1+2*dnorm(k)/(k*dk))
u=runif(n)
return((u<alp)*qnorm(pmk+runif(n)*dk)+(u>alp)*(1-2*(runif(n)<.5))*(k+rexp(n)/k))
}
dhuber <-function(x,k=1){
x=abs(x)
meps=1/(1+2*dnorm(k)/k-2*pnorm(-k))
return(meps*((x<k)*dnorm(x)+(x>=k)*exp(-k*(x-.5*k))/sqrt(2*pi)))}
|
Generating random samples from Huber density
As you suggest, this distribution is a mixture of a truncated Normal distribution and of a truncated Laplace distribution: namely,
$$f(x)\propto \exp\left\{-x^2/2\right\}\mathbb{I}_{(-k,k)}(x)+\exp\le
|
47,868
|
Poor model fit but significant and high path coefficient values in Structural Equation Modeling
|
Yes, it's easy.
Let's say that this is your population model:
+---+ 0.5 +----+
| X +------------> | Y |
+-+-+ +-+--+
| +----+ ^
+---->+ M +-------+
0.5 +----+ 0.5
And you fit this model:
+---+ +----+
| X | | Y |
+-+-+ +-+--+
| +----+ ^
+---->+ M +-------+
+----+
You've omitted the direct path from X to Y.
This omission will make the model fit, very badly.
However, the parameters from X to M and M to Y will be high - higher than they should be, and (for any reasonable sample size) highly significant.
Model fit comes first. If your model doesn't fit, you don't trust the parameter estimates.
(That doesn't mean that if your model does fit, you do trust them.)
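The inflation can be seen numerically with a rough regression analogue in Python (a sketch, not a full SEM fit; path values follow the diagram, and the single-predictor regression stands in for the mis-specified model):

```python
import random
random.seed(2)

# Population: X -> M (0.5), X -> Y (0.5), M -> Y (0.5); Var(X) = Var(M) = 1.
n = 100_000
ms, ys = [], []
for _ in range(n):
    x = random.gauss(0, 1)
    m = 0.5 * x + random.gauss(0, 0.75 ** 0.5)   # so Var(M) = 1
    y = 0.5 * x + 0.5 * m + random.gauss(0, 0.5)
    ms.append(m)
    ys.append(y)

# OLS slope of Y on M alone, i.e. with the direct X -> Y path omitted
mean_m = sum(ms) / n
mean_y = sum(ys) / n
slope = sum((mi - mean_m) * (yi - mean_y) for mi, yi in zip(ms, ys)) \
        / sum((mi - mean_m) ** 2 for mi in ms)
print(slope)   # ≈ 0.75: the M -> Y estimate absorbs the omitted X -> Y path
```

The estimated M-to-Y effect lands near 0.75 rather than the true 0.5, illustrating how omitting the direct path inflates the remaining parameters.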
|
Poor model fit but significant and high path coefficient values in Structural Equation Modeling
|
Yes, it's easy.
Let's say that this is your population model:
+---+ 0.5 +----+
| X +------------> | Y |
+-+-+ +-+--+
| +----+ ^
+---->+ M +-------+
0.5
|
Poor model fit but significant and high path coefficient values in Structural Equation Modeling
Yes, it's easy.
Let's say that this is your population model:
+---+ 0.5 +----+
| X +------------> | Y |
+-+-+ +-+--+
| +----+ ^
+---->+ M +-------+
0.5 +----+ 0.5
And you fit this model:
+---+ +----+
| X | | Y |
+-+-+ +-+--+
| +----+ ^
+---->+ M +-------+
+----+
You've omitted the direct path from X to Y.
This omission will make the model fit, very badly.
However, the parameters from X to M and M to Y will be high - higher than they should be, and (for any reasonable sample size) highly significant.
Model fit comes first. If your model doesn't fit, you don't trust the parameter estimates.
(That doesn't mean that if your model does fit, you do trust them.)
|
Poor model fit but significant and high path coefficient values in Structural Equation Modeling
Yes, it's easy.
Let's say that this is your population model:
+---+ 0.5 +----+
| X +------------> | Y |
+-+-+ +-+--+
| +----+ ^
+---->+ M +-------+
0.5
|
47,869
|
Help with zero-inflated generalized linear mixed models with random factor in R
|
No, zeroinfl() currently does not support random effects. So the formula you specified actually means something different: You use a fixed treatment effect in the count part and a fixed site effect in the zero-inflation part. See vignette("countreg", package = "pscl") for more details.
If you want random effects, then no. If you use fixed interaction effects instead, you could still try to find a suitable model with zeroinfl(). But with your number of observations this is probably not the best solution.
As the model is not the one you would want to fit, this is not relevant here.
For zeroinfl() there would be and I suppose that for glmmADMB there are as well. But I'm not an expert on that.
You could employ effect plots for the covariate effects or rootograms for the goodness of fit. It depends on what you really want to show.
|
Help with zero-inflated generalized linear mixed models with random factor in R
|
No, zeroinfl() currently does not support random effects. So the formula you specified actually means something different: You use a fixed treatment effect in the count part and a fixed site effect in
|
Help with zero-inflated generalized linear mixed models with random factor in R
No, zeroinfl() currently does not support random effects. So the formula you specified actually means something different: You use a fixed treatment effect in the count part and a fixed site effect in the zero-inflation part. See vignette("countreg", package = "pscl") for more details.
If you want random effects, then no. If you use fixed interaction effects instead, you could still try to find a suitable model with zeroinfl(). But with your number of observations this is probably not the best solution.
As the model is not the one you would want to fit, this is not relevant here.
For zeroinfl() there would be and I suppose that for glmmADMB there are as well. But I'm not an expert on that.
You could employ effect plots for the covariate effects or rootograms for the goodness of fit. It depends on what you really want to show.
|
Help with zero-inflated generalized linear mixed models with random factor in R
No, zeroinfl() currently does not support random effects. So the formula you specified actually means something different: You use a fixed treatment effect in the count part and a fixed site effect in
|
47,870
|
Finding the MLE for a mixture of random variables which are discrete and continuous
|
You are implicitly assuming the $(X_i,Y_i)$ are iid. Therefore you may freely re-index the observations $(x_i,y_i)$ so that $x_0 = 0 \le x_1 \le x_2 \cdots \le x_n \le 1 = x_{n+1}$. The definition of $Y_i$ implies there exists an index $k$ for which
$$y_1 = y_2 = \cdots = y_k = 1;\ y_{k+1}=y_{k+2}=\cdots=y_n = 0.$$
When $p$ is such that $x_k \le p \le x_{k+1}$ the likelihood is nonzero and equals
$$L(p) = p^k(1-p)^{n-k}.$$
For any other value of $p$ the likelihood is zero, demonstrating we may confine the search for a maximum to the interval $[x_k, x_{k+1}]$. Within the interior of this interval the log likelihood
$$\Lambda(p) = k\log(p) + (n-k)\log(1-p)$$
has derivative
$$\frac{d\Lambda}{dp}(p) = \frac{k}{p} - \frac{n-k}{1-p}$$
which (as a function of the interval $(0,1)$) is positive for small $p$, negative for large $p$, and zero where $p=k/n$. This leads to three circumstances:
(1) When $x_k \lt k/n \lt x_{k+1}$, then $\hat p = k/n$. Moreover, $\Lambda$ is smooth in a neighborhood of $\hat p$ (implying the usual Hessian/Fisher Information/score techniques apply for large $n$).
(2) When $k/n \le x_k$, then $\hat p = x_k$. However, $\Lambda$ is discontinuous at this value, so the usual MLE estimates of standard errors, confidence intervals, etc. do not apply.
(3) When $k/n \ge x_{k+1}$, then $\hat p = x_{k+1}$. The same caution applies as in (2).
It might be of interest to compute the chances of these three cases. In (1), exactly $k$ of the $n$ $x_i$ are in the interval $[0, p]$ and $n-k$ are in its complement. The chance of this Binomial event is $\binom{n}{k}p^k(1-p)^{n-k}$. This chance approaches zero asymptotically (at a $O(n^{-1/2})$ rate). Thus for large $n$ we can expect that case (1) rarely holds.
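The three-case rule can be sketched numerically. The data-generating step below (taking $y_i=1$ exactly when $x_i\le p$) is only an illustrative assumption used to produce the sorted structure described above; the case analysis itself follows the answer.

```python
import numpy as np

rng = np.random.default_rng(7)
p_true, n = 0.4, 50

# Illustrative data with the structure described above: after sorting,
# the first k of the y's are 1 and the rest are 0
x = np.sort(rng.random(n))
y = (x <= p_true).astype(int)
k = int(y.sum())

# Bracket [x_k, x_{k+1}] with the conventions x_0 = 0 and x_{n+1} = 1
xs = np.concatenate(([0.0], x, [1.0]))
lo, hi = xs[k], xs[k + 1]

# Three cases for the maximiser of L(p) = p**k * (1 - p)**(n - k) on [lo, hi]
if lo < k / n < hi:
    p_hat = k / n      # interior stationary point at k/n
elif k / n <= lo:
    p_hat = lo         # maximum at the left endpoint
else:
    p_hat = hi         # maximum at the right endpoint

# Cross-check: p_hat maximises the log-likelihood over a fine grid in [lo, hi]
grid = np.linspace(lo + 1e-9, hi - 1e-9, 10_001)
loglik = k * np.log(grid) + (n - k) * np.log(1 - grid)
assert abs(grid[np.argmax(loglik)] - p_hat) < (hi - lo) / 100
print(k, lo, hi, p_hat)
```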
|
Finding the MLE for a mixture of random variables which are discrete and continuous
|
You are implicitly assuming the $(X_i,Y_i)$ are iid. Therefore you may freely re-index the observations $(x_i,y_i)$ so that $x_0 = 0 \le x_1 \le x_2 \cdots \le x_n \le 1 = x_{n+1}$. The definition o
|
Finding the MLE for a mixture of random variables which are discrete and continuous
You are implicitly assuming the $(X_i,Y_i)$ are iid. Therefore you may freely re-index the observations $(x_i,y_i)$ so that $x_0 = 0 \le x_1 \le x_2 \cdots \le x_n \le 1 = x_{n+1}$. The definition of $Y_i$ implies there exists an index $k$ for which
$$y_1 = y_2 = \cdots = y_k = 1;\ y_{k+1}=y_{k+2}=\cdots=y_n = 0.$$
When $p$ is such that $x_k \le p \le x_{k+1}$ the likelihood is nonzero and equals
$$L(p) = p^k(1-p)^{n-k}.$$
For any other value of $p$ the likelihood is zero, demonstrating we may confine the search for a maximum to the interval $[x_k, x_{k+1}]$. Within the interior of this interval the log likelihood
$$\Lambda(p) = k\log(p) + (n-k)\log(1-p)$$
has derivative
$$\frac{d\Lambda}{dp}(p) = \frac{k}{p} - \frac{n-k}{1-p}$$
which (as a function of the interval $(0,1)$) is positive for small $p$, negative for large $p$, and zero where $p=k/n$. This leads to three circumstances:
(1) When $x_k \lt k/n \lt x_{k+1}$, then $\hat p = k/n$. Moreover, $\Lambda$ is smooth in a neighborhood of $\hat p$ (implying the usual Hessian/Fisher Information/score techniques apply for large $n$).
(2) When $k/n \le x_k$, then $\hat p = x_k$. However, $\Lambda$ is discontinuous at this value, so the usual MLE estimates of standard errors, confidence intervals, etc. do not apply.
(3) When $k/n \ge x_{k+1}$, then $\hat p = x_{k+1}$. The same caution applies as in (2).
It might be of interest to compute the chances of these three cases. In (1), exactly $k$ of the $n$ $x_i$ are in the interval $[0, p]$ and $n-k$ are in its complement. The chance of this Binomial event is $\binom{n}{k}p^k(1-p)^{n-k}$. This chance approaches zero asymptotically (at a $O(n^{-1/2})$ rate). Thus for large $n$ we can expect that case (1) rarely holds.
|
Finding the MLE for a mixture of random variables which are discrete and continuous
You are implicitly assuming the $(X_i,Y_i)$ are iid. Therefore you may freely re-index the observations $(x_i,y_i)$ so that $x_0 = 0 \le x_1 \le x_2 \cdots \le x_n \le 1 = x_{n+1}$. The definition o
|
47,871
|
Appropriateness of one-sided hypothesis tests when testing medical treatments
|
Hypothesis testing:
I refer to this answer: What follows if we fail to reject the null hypothesis?.
Hypothesis testing is about ''finding statistical evidence for your alternative hypothesis $H_A$'', i.e. whether the data you observe is (statistical) evidence that $H_A$ is true (see What follows if we fail to reject the null hypothesis? for more detail).
So if you want to ''show'' (with the data that you observe) that $p_{high} > p_{low}$, then your alternative hypothesis should be $H_A: p_{high} > p_{low}$ versus $H_0: p_{high} \le p_{low}$. (Note that there is no ''$\hat{}$'' here because this is about the ''true'' values of $p_{high}$ and $p_{low}$.)
This is a one-sided hypothesis, so this has nothing to do yet with the critical region being one-sided or two-sided. Note that we talk about a critical region, not about a confidence interval. (see below for explanation on confidence intervals).
Once you have formulated your hypothesis, you can, using a test statistic, define a critical region. If the data that you observe, i.e. $\hat{p}_{high} - \hat{p}_{low}$ (here is where the "$\hat{}$" comes in, i.e. the data you observe), falls in the critical region, then you reject $H_0$ and conclude that your data are evidence in favor of $H_A$. How do we now choose that critical region? Well, given a significance level $\alpha$, we choose the critical region where we can most easily find evidence for $H_A$. In other words, we choose the critical region with the highest power given $\alpha$. It happens that, for a (univariate) one-sided hypothesis (as you have), a one-sided critical region (i.e. an interval in the tail of the distribution of the test statistic) has more power.
Note that, if you want to show that the ''true'' $p_{high}$ is different from the ''true'' $p_{low}$, then your alternative $H_A^{(1)}$ should be $H_A^{(1)}: p_{high} \ne p_{low}$ and the null should be $H_0^{(1)}: p_{high} = p_{low}$. In that case you're better off with a two-sided critical region.
But your alternative hypothesis is just what you want to show, so if you want to show that $p_{high} > p_{low}$ then this is your alternative hypothesis and your critical region is chosen to have the best power. If you want to show that $p_{high} \ne p_{low}$ then this should be your alternative (and then you're better off with a two-sided critical region). So it all depends on what the researcher wants to show: do you want to show that ''high treatment has more effect than low treatment'' or do you want to show that ''high and low treatment have a different effect''?
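To make the power point concrete, here is a sketch of a two-proportion z-test with made-up counts (the numbers are purely illustrative); for the same data, the one-sided p-value is half the two-sided one, so a one-sided critical region rejects more easily when the effect is in the hypothesised direction:

```python
import numpy as np
from scipy.stats import norm

# Hypothetical trial results: successes / trials per arm
x_high, n_high = 60, 100
x_low,  n_low  = 45, 100

ph, pl = x_high / n_high, x_low / n_low
p_pool = (x_high + x_low) / (n_high + n_low)        # pooled estimate under H0
se = np.sqrt(p_pool * (1 - p_pool) * (1 / n_high + 1 / n_low))
z = (ph - pl) / se

p_one_sided = norm.sf(z)            # H_A: p_high > p_low
p_two_sided = 2 * norm.sf(abs(z))   # H_A: p_high != p_low
print(f"z = {z:.3f}, one-sided p = {p_one_sided:.4f}, two-sided p = {p_two_sided:.4f}")
```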
Confidence intervals
Confidence intervals are different from critical regions, see Why is there a need for a 'sampling distribution' to find confidence intervals? for explanation on confidence intervals.
Hypothesis testing uses the observed data to find ''statistical evidence'' for your hypothesis $H_A$, which is just a statement about the ''true'' parameters $p_{high}$ and $p_{low}$ (no $\hat{}$ because it is about the ''true'' values of these parameters).
A confidence interval is an interval (at a chosen confidence level) around the observed value $\hat{p}_{high} - \hat{p}_{low}$ (observed therefore the $\hat{}$) in which you are ''confident'' at a certain level that the interval will contain the ''true'' value of $p_{high}-p_{low}$.
Note that there is only one ''true'' value for $p_{high}$ and $p_{low}$, but we do not know these ''true'' values. On the other hand, each time we take a sample (i.e. each time we observe data) we will usually find another value for $\hat{p}_{high} - \hat{p}_{low}$. You now want to use one such observation $\hat{p}_{high} - \hat{p}_{low}$ to make inferences about the (one) ''true'' value of $p_{high}-p_{low}$. There are two ways to make inferences: (1) make a statement about $p_{high}$ and $p_{low}$ and then ''check'' whether the observation confirms the statement (at a certain significance level); this is hypothesis testing. And (2) use the observed data $\hat{p}_{high} - \hat{p}_{low}$ to construct an interval that contains (at a certain confidence level) the true value $p_{high}-p_{low}$. This is a confidence interval.
Finally note that hypothesis tests can be used to construct confidence intervals and vice versa.
|
Appropriateness of one-sided hypothesis tests when testing medical treatments
|
Hypothesis testing:
I refer to this answer: What follows if we fail to reject the null hypothesis?.
Hypothesis testing is about ''finding statistical evidence for your alternative hypothesis $H_A$'',
|
Appropriateness of one-sided hypothesis tests when testing medical treatments
Hypothesis testing:
I refer to this answer: What follows if we fail to reject the null hypothesis?.
Hypothesis testing is about ''finding statistical evidence for your alternative hypothesis $H_A$'', i.e. whether the data you observe is (statistical) evidence that $H_A$ is true (see What follows if we fail to reject the null hypothesis? for more detail).
So if you want to ''show'' (with the data that you observe) that $p_{high} > p_{low}$, then your alternative hypothesis should be $H_A: p_{high} > p_{low}$ versus $H_0: p_{high} \le p_{low}$. (Note that there is no ''$\hat{}$'' here because this is about the ''true'' values of $p_{high}$ and $p_{low}$.)
This is a one-sided hypothesis, so this has nothing to do yet with the critical region being one-sided or two-sided. Note that we talk about a critical region, not about a confidence interval. (see below for explanation on confidence intervals).
Once you have formulated your hypothesis, you can, using a test statistic, define a critical region. If the data that you observe, i.e. $\hat{p}_{high} - \hat{p}_{low}$ (here is where the "$\hat{}$" comes in, i.e. the data you observe), falls in the critical region, then you reject $H_0$ and conclude that your data are evidence in favor of $H_A$. How do we now choose that critical region? Well, given a significance level $\alpha$, we choose the critical region where we can most easily find evidence for $H_A$. In other words, we choose the critical region with the highest power given $\alpha$. It happens that, for a (univariate) one-sided hypothesis (as you have), a one-sided critical region (i.e. an interval in the tail of the distribution of the test statistic) has more power.
Note that, if you want to show that the ''true'' $p_{high}$ is different from the ''true'' $p_{low}$, then your alternative $H_A^{(1)}$ should be $H_A^{(1)}: p_{high} \ne p_{low}$ and the null should be $H_0^{(1)}: p_{high} = p_{low}$. In that case you're better off with a two-sided critical region.
But your alternative hypothesis is just what you want to show, so if you want to show that $p_{high} > p_{low}$ then this is your alternative hypothesis and your critical region is chosen to have the best power. If you want to show that $p_{high} \ne p_{low}$ then this should be your alternative (and then you're better off with a two-sided critical region). So it all depends on what the researcher wants to show: do you want to show that ''high treatment has more effect than low treatment'' or do you want to show that ''high and low treatment have a different effect''?
Confidence intervals
Confidence intervals are different from critical regions, see Why is there a need for a 'sampling distribution' to find confidence intervals? for explanation on confidence intervals.
Hypothesis testing uses the observed data to find ''statistical evidence'' for your hypothesis $H_A$, which is just a statement about the ''true'' parameters $p_{high}$ and $p_{low}$ (no $\hat{}$ because it is about the ''true'' values of these parameters).
A confidence interval is an interval (at a chosen confidence level) around the observed value $\hat{p}_{high} - \hat{p}_{low}$ (observed therefore the $\hat{}$) in which you are ''confident'' at a certain level that the interval will contain the ''true'' value of $p_{high}-p_{low}$.
Note that there is only one ''true'' value for $p_{high}$ and $p_{low}$, but we do not know these ''true'' values. On the other hand, each time we take a sample (i.e. each time we observe data) we will usually find another value for $\hat{p}_{high} - \hat{p}_{low}$. You now want to use one such observation $\hat{p}_{high} - \hat{p}_{low}$ to make inferences about the (one) ''true'' value of $p_{high}-p_{low}$. There are two ways to make inferences: (1) make a statement about $p_{high}$ and $p_{low}$ and then ''check'' whether the observation confirms the statement (at a certain significance level); this is hypothesis testing. And (2) use the observed data $\hat{p}_{high} - \hat{p}_{low}$ to construct an interval that contains (at a certain confidence level) the true value $p_{high}-p_{low}$. This is a confidence interval.
Finally note that hypothesis tests can be used to construct confidence intervals and vice versa.
|
Appropriateness of one-sided hypothesis tests when testing medical treatments
Hypothesis testing:
I refer to this answer: What follows if we fail to reject the null hypothesis?.
Hypothesis testing is about ''finding statistical evidence for your alternative hypothesis $H_A$'',
|
47,872
|
Appropriateness of one-sided hypothesis tests when testing medical treatments
|
A one-sided confidence interval (CI)/test is as good as a two-sided CI/test: it all depends on your assumptions and goals. Given what you are telling us, i.e. that the prior knowledge is very limited ('hunch'), using a one-sided approach is almost groundless, and you risk being not conservative enough. I would thus recommend using a two-sided CI/test.
Choosing a shortcut such as a one-sided CI/test in this phase is likely to backfire when the final report is submitted for dissemination/publication.
References which are also pertinent to your issue are those on inferiority vs equivalence trials:
http://www.ncbi.nlm.nih.gov/pubmed/26604186
http://www.ncbi.nlm.nih.gov/pubmed/24137721
http://www.ncbi.nlm.nih.gov/pubmed/22145119
|
Appropriateness of one-sided hypothesis tests when testing medical treatments
|
A one-sided confidence interval (CI)/test is as good as a two-sided CI/test: it all depends on your assumptions and goals. Given what you are telling us, ie that the prior knowledge is very limited ('
|
Appropriateness of one-sided hypothesis tests when testing medical treatments
A one-sided confidence interval (CI)/test is as good as a two-sided CI/test: it all depends on your assumptions and goals. Given what you are telling us, i.e. that the prior knowledge is very limited ('hunch'), using a one-sided approach is almost groundless, and you risk being not conservative enough. I would thus recommend using a two-sided CI/test.
Choosing a shortcut such as a one-sided CI/test in this phase is likely to backfire when the final report is submitted for dissemination/publication.
References which are also pertinent to your issue are those on inferiority vs equivalence trials:
http://www.ncbi.nlm.nih.gov/pubmed/26604186
http://www.ncbi.nlm.nih.gov/pubmed/24137721
http://www.ncbi.nlm.nih.gov/pubmed/22145119
|
Appropriateness of one-sided hypothesis tests when testing medical treatments
A one-sided confidence interval (CI)/test is as good as a two-sided CI/test: it all depends on your assumptions and goals. Given what you are telling us, ie that the prior knowledge is very limited ('
|
47,873
|
Why do we calculate pooled standard deviations by using variances?
|
We work with variances rather than standard deviations because variances have special properties.
In particular, variances of sums and differences of variables have a simple form, and if the variables are independent, the result is even simpler.
That is, if two variables are independent, the variance of the difference is the sum of the variances ("variances add" -- but standard deviations don't).
Specifically, in say a two-sample t test, we're trying to find the standard deviation of the difference in sample means. We can use basic properties of variance (linked above) to see that the variance of the individual sample means is $\sigma^2/n$, which we can estimate by $s^2/n$ for each sample.
Now that we have the variance of each of the means, we can use the "variances add" result to get that the variance of the difference of the means is the sum of the two variances of the sample means. So the standard deviation of the distribution of the difference in means (the standard error of the difference in means) is the square root of that sum.
This works quite directly for the Welch t-test, where we estimate $\text{Var}(\bar{X}-\bar{Y})$ by $s_x^2/n_x+s_y^2/n_y$. The equal-variance version works using the same idea but because the variances are assumed identical, there we produce a single overall estimate of $\sigma^2$ from both samples. That is, we
add together all the squared deviations from the corresponding group mean before dividing by the total d.f. from the two groups (each loses 1 d.f. because we measure deviations from the individual group means). This corresponds to a form of d.f.-weighted average of the individual variances $s^2_p=w_xs^2_x+w_ys^2_y$ where $w_x=\text{df}_x/(\text{df}_x+\text{df}_y)$. Then that single estimate of pooled variance $s^2_p$ is used in an estimate of the variance of the difference in means. Since $\text{Var}(\bar{X})=\sigma^2/n_x$ and $\text{Var}(\bar{Y})=\sigma^2/n_y$, again the variances add, so $\text{Var}(\bar{X}-\bar{Y})=\sigma^2/n_x+\sigma^2/n_y$, which we then estimate by replacing $\sigma^2$ by the estimate $s^2_p$.
In either case, we can standardize our difference in means by dividing by the corresponding estimate of standard error. In both cases this is where the denominator of the $t$-statistic comes from.
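As a sketch with simulated data (the samples and parameters below are arbitrary), both standard errors can be computed by hand from the "variances add" rule and cross-checked against scipy:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, size=30)
y = rng.normal(0.5, 1.0, size=40)

# Pooled variance: d.f.-weighted average of the two sample variances
dfx, dfy = len(x) - 1, len(y) - 1
s2p = (dfx * x.var(ddof=1) + dfy * y.var(ddof=1)) / (dfx + dfy)
se_pooled = np.sqrt(s2p * (1 / len(x) + 1 / len(y)))
t_pooled = (x.mean() - y.mean()) / se_pooled

# Welch: estimate each Var(mean) separately, then add them
se_welch = np.sqrt(x.var(ddof=1) / len(x) + y.var(ddof=1) / len(y))
t_welch = (x.mean() - y.mean()) / se_welch

# Both match scipy's t statistics
assert np.isclose(t_pooled, stats.ttest_ind(x, y, equal_var=True).statistic)
assert np.isclose(t_welch, stats.ttest_ind(x, y, equal_var=False).statistic)
print(t_pooled, t_welch)
```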
Similar results come up in other cases.
|
Why do we calculate pooled standard deviations by using variances?
|
We work with variances rather than standard deviations because variances have special properties.
In particular, variances of sums and differences of variables have a simple form, and if the variables
|
Why do we calculate pooled standard deviations by using variances?
We work with variances rather than standard deviations because variances have special properties.
In particular, variances of sums and differences of variables have a simple form, and if the variables are independent, the result is even simpler.
That is, if two variables are independent, the variance of the difference is the sum of the variances ("variances add" -- but standard deviations don't).
Specifically, in say a two-sample t test, we're trying to find the standard deviation of the difference in sample means. We can use basic properties of variance (linked above) to see that the variance of the individual sample means is $\sigma^2/n$, which we can estimate by $s^2/n$ for each sample.
Now that we have the variance of each of the means, we can use the "variances add" result to get that the variance of the difference of the means is the sum of the two variances of the sample means. So the standard deviation of the distribution of the difference in means (the standard error of the difference in means) is the square root of that sum.
This works quite directly for the Welch t-test, where we estimate $\text{Var}(\bar{X}-\bar{Y})$ by $s_x^2/n_x+s_y^2/n_y$. The equal-variance version works using the same idea but because the variances are assumed identical, there we produce a single overall estimate of $\sigma^2$ from both samples. That is, we
add together all the squared deviations from the corresponding group mean before dividing by the total d.f. from the two groups (each loses 1 d.f. because we measure deviations from the individual group means). This corresponds to a form of d.f.-weighted average of the individual variances $s^2_p=w_xs^2_x+w_ys^2_y$ where $w_x=\text{df}_x/(\text{df}_x+\text{df}_y)$. Then that single estimate of pooled variance $s^2_p$ is used in an estimate of the variance of the difference in means. Since $\text{Var}(\bar{X})=\sigma^2/n_x$ and $\text{Var}(\bar{Y})=\sigma^2/n_y$, again the variances add, so $\text{Var}(\bar{X}-\bar{Y})=\sigma^2/n_x+\sigma^2/n_y$, which we then estimate by replacing $\sigma^2$ by the estimate $s^2_p$.
In either case, we can standardize our difference in means by dividing by the corresponding estimate of standard error. In both cases this is where the denominator of the $t$-statistic comes from.
Similar results come up in other cases.
|
Why do we calculate pooled standard deviations by using variances?
We work with variances rather than standard deviations because variances have special properties.
In particular, variances of sums and differences of variables have a simple form, and if the variables
|
47,874
|
Combining one class classifiers to do multi-class classification
|
I've done something like this using either of the following:
(a) Given three different classes (e.g. A, B, C), create an input column for each class. Place '1' in the A column if the sample is an A, '0' otherwise - do this for B and C classes using the same logic. The foregoing columns will be your target fields for three separate binary classifiers (a classifier for A, B, and C).
(b) Feed the predictions - in addition to any other features - into a third classifier, a multiclass classifier whose target is the tri-level target.
Alternatively, taking the same approach as (a), take the predictions and use rule-based logic (or misclassification costs) to separate the class predictions - this is to avoid ending up with the same sample being predicted as both A and B, both A and C, etc.
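A minimal sketch of approach (a) in scikit-learn (the dataset and the choice of logistic regression as the binary learner are arbitrary illustrations): build one 0/1 indicator column per class, fit one binary classifier per column, then resolve overlapping predictions by taking the highest score.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=300, n_classes=3, n_informative=6,
                           random_state=0)

# One indicator target per class (1 if the sample belongs to it, else 0),
# and one binary classifier per indicator column
scores = np.column_stack([
    LogisticRegression(max_iter=1000)
        .fit(X, (y == c).astype(int))
        .predict_proba(X)[:, 1]
    for c in sorted(set(y))
])

# Resolve "predicted as both A and B" conflicts: pick the highest score
pred = scores.argmax(axis=1)
print("training accuracy:", (pred == y).mean())
```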
|
Combining one class classifiers to do multi-class classification
|
I've done something like this using either of the following:
(a) Given three different classes (e.g. A, B, C), create an input column for each class. Place '1' in the A column if the sample is an A,
|
Combining one class classifiers to do multi-class classification
I've done something like this using either of the following:
(a) Given three different classes (e.g. A, B, C), create an input column for each class. Place '1' in the A column if the sample is an A, '0' otherwise - do this for B and C classes using the same logic. The foregoing columns will be your target fields for three separate binary classifiers (a classifier for A, B, and C).
(b) Feed the predictions - in addition to any other features - into a third classifier, a multiclass classifier whose target is the tri-level target.
Alternatively, taking the same approach as (a), take the predictions and use rule-based logic (or misclassification costs) to separate the class predictions - this is to avoid ending up with the same sample being predicted as both A and B, both A and C, etc.
|
Combining one class classifiers to do multi-class classification
I've done something like this using either of the following:
(a) Given three different classes (e.g. A, B, C), create an input column for each class. Place '1' in the A column if the sample is an A,
|
47,875
|
Combining one class classifiers to do multi-class classification
|
Two classifiers which do 0 vs 1 and 0 vs 2 classifications intuitively should perform better than a classifier which has to distinguish between all three at once. The intuition being that the choice of which 2-classifier to use for a given sample also has to be learned when doing the 0 vs 1 vs 2 classification problem.
A nice paper I found which might help is Fitted Learning: Models with Awareness of their Limits.
It takes a simple neural network, of the feed-forward kind, but the key idea is that instead of teaching it to predict a vector [0,0,1], [0, 1, 0] or [1, 0, 0] you teach it to predict another vector.
You choose an arbitrary number (say 2) and then the targets you need to predict for each class follow a simple mapping.
[0, 0, 1] -> [0, 0, 0.5, 0, 0, 0.5]
[0, 1, 0] -> [0, 0.5, 0, 0, 0.5, 0]
[1, 0, 0] -> [0.5, 0, 0, 0.5, 0, 0]
That allows you to learn a much cleaner classification. I'd recommend going through the paper and seeing if it helps your problem.
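The mapping above can be written as a small helper (a sketch; the function name and the tile-and-scale construction are mine, inferred from the three examples): pick a number of copies $c$, scale the one-hot vector by $1/c$, and repeat it $c$ times.

```python
import numpy as np

def fitted_target(one_hot, copies=2):
    """Map a one-hot label to the repeated, scaled target vector."""
    one_hot = np.asarray(one_hot, dtype=float)
    return np.tile(one_hot / copies, copies)

print(fitted_target([0, 0, 1]))  # the [0, 0, 0.5, 0, 0, 0.5] mapping above
```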
|
Combining one class classifiers to do multi-class classification
|
Two classifiers which do 0 vs 1 and 0 vs 2 classifications intuitively should perform better than a classifier which has to distinguish between all three at once. The intuition being, that the choice
|
Combining one class classifiers to do multi-class classification
Two classifiers which do 0 vs 1 and 0 vs 2 classifications intuitively should perform better than a classifier which has to distinguish between all three at once. The intuition being that the choice of which 2-classifier to use for a given sample also has to be learned when doing the 0 vs 1 vs 2 classification problem.
A nice paper I found which might help is Fitted Learning: Models with Awareness of their Limits.
It takes a simple neural network, of the feed-forward kind, but the key idea is that instead of teaching it to predict a vector [0,0,1], [0, 1, 0] or [1, 0, 0] you teach it to predict another vector.
You choose an arbitrary number (say 2) and then the targets you need to predict for each class follow a simple mapping.
[0, 0, 1] -> [0, 0, 0.5, 0, 0, 0.5]
[0, 1, 0] -> [0, 0.5, 0, 0, 0.5, 0]
[1, 0, 0] -> [0.5, 0, 0, 0.5, 0, 0]
That allows you to learn a much cleaner classification. I'd recommend going through the paper and seeing if it helps your problem.
|
Combining one class classifiers to do multi-class classification
Two classifiers which do 0 vs 1 and 0 vs 2 classifications intuitively should perform better than a classifier which has to distinguish between all three at once. The intuition being, that the choice
|
47,876
|
Combining one class classifiers to do multi-class classification
|
First of all, regarding terminology, you are talking about using multiple two-class classifiers, rather than one-class classifiers. One-class classifiers are a class of models used for anomaly or novelty detection, where you have data coming only from a single class. If you have two classes, it's a two-class classifier.
What you want to do is one-vs-rest strategy in multiclass classification. Scikit-learn has nice documentation on such classifiers and an API that allows for fitting such classifiers out of the box. You don't need to do weighted averaging: each of the individual classifiers will return some kind of score (often a probability), so for each class you would have the score for this class vs the other classes; to make a prediction, you pick the class that has the highest score.
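A minimal one-vs-rest example with scikit-learn (iris is just a convenient built-in three-class dataset; the base learner is arbitrary):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

X, y = load_iris(return_X_y=True)

# One binary classifier per class; predict() picks the class with the
# highest per-class score, so no manual weighted averaging is needed
ovr = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, y)
print("classes:", ovr.classes_, "training accuracy:", ovr.score(X, y))
```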
|
Combining one class classifiers to do multi-class classification
|
First of all, regarding terminology, you are talking about using multiple two-class classifiers, rather than one-class classifiers. One class classifiers are a class of models used for anomaly or novel
|
Combining one class classifiers to do multi-class classification
First of all, regarding terminology, you are talking about using multiple two-class classifiers, rather than one-class classifiers. One-class classifiers are a class of models used for anomaly or novelty detection, where you have data coming only from a single class. If you have two classes, it's a two-class classifier.
What you want to do is one-vs-rest strategy in multiclass classification. Scikit-learn has nice documentation on such classifiers and an API that allows for fitting such classifiers out of the box. You don't need to do weighted averaging: each of the individual classifiers will return some kind of score (often a probability), so for each class you would have the score for this class vs the other classes; to make a prediction, you pick the class that has the highest score.
|
Combining one class classifiers to do multi-class classification
First of all, regarding terminology, you are talking about using multiple two-class classifiers, rather than one-class classifiers. One class classifiers are a class of models used for anomaly or novel
|
47,877
|
Combining one class classifiers to do multi-class classification
|
I am not so familiar with Bayes Networks. If you are interested in learning a weighting scheme, I'd propose a meta-linear model to combine those outputs.
A perceptron or linear support vector machine may work well here.
|
Combining one class classifiers to do multi-class classification
|
I am not so familiar with Bayes Networks. If you are interested in learning a weighting scheme, I'd propose a meta-linear model to combine those outputs.
A perceptron or linear support vector machine
|
Combining one class classifiers to do multi-class classification
I am not so familiar with Bayes Networks. If you are interested in learning a weighting scheme, I'd propose a meta-linear model to combine those outputs.
A perceptron or linear support vector machine may work well here.
|
Combining one class classifiers to do multi-class classification
I am not so familiar with Bayes Networks. If you are interested in learning a weighting scheme, I'd propose a meta-linear model to combine those outputs.
A perceptron or linear support vector machine
|
47,878
|
Need more intuition for the curse of dimensionality [duplicate]
|
I am used to an essentially identical, but in my opinion more illustrative, example.
Let $x_1,...x_l$ be i.i.d. and uniformly distributed in the unit $n$-ball centered at the origin. Then it can be shown (a derivation is given at the end of this answer) that the median of the minimum of the Euclidean distances of these points from the origin, $m=\text{med}\min_l(\rho(x_1,0),...,\rho(x_l,0))$, is
$$
m=\left[1-2^{-1/l}\right]^{1/n}
$$
Obviously, $m\to_{n\to\infty}1$.
Now, for some intuition about the curse of dimensionality, imagine that we want to classify the point at the origin using a $kNN$ classifier (for simplicity even with $k=1$). What this formula gives us is that when the dimensionality of the feature space becomes large enough, typically even the nearest point of our training sample will "almost surely" (not exactly in the measure-theoretic sense) lie almost on the boundary of our unit ball; thus all sample points will have almost the same Euclidean distance from our point, rendering comparisons of distances to the point of interest effectively useless.
This is how I like to think about the catchphrase "In a high-dimensional space, almost all points are almost equally as distant from each other". Hope this intuition satisfies you.
EDIT
Proof of the formula:
1) Let $r(x)=\rho(x, 0)$. Then the distribution function of $r$ is given by
$$
F_r(t)=P(\rho(x, 0)<t)=\frac{V_n(t)}{V_n(1)}=t^n,
$$
where $V_n(t)$ is the volume of an $n$-dimensional ball of radius $t$.
2) Let $M(X)=\min(r(x_1),...,r(x_l))$. Since $M\geq t$ exactly when every $r(x_i)\geq t$, the distribution of $M$ is
$$
F_M(t)=P(M<t)=1-P(M\geq{t})=1-(1-F_r(t))^l=1-(1-t^n)^l.
$$
3) Now, the definition of $m$ is $F_M(m)=1/2$. Simple arithmetic now gives the claim.
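The closed form $m=\left[1-2^{-1/l}\right]^{1/n}$ can be sanity-checked by simulation (a sketch; it uses the fact, implied by $F_r(t)=t^n$, that the radius of a uniform point in the unit $n$-ball can be sampled as $U^{1/n}$, and that the distance of the nearest of the $l$ points is just the smallest radius):

```python
import numpy as np

def median_closed_form(n, l):
    # m = (1 - 2**(-1/l))**(1/n), as derived above
    return (1.0 - 2.0 ** (-1.0 / l)) ** (1.0 / n)

def median_simulated(n, l, reps=4000, seed=0):
    # Radius of a uniform point in the unit n-ball has CDF t**n,
    # so it can be sampled as U**(1/n)
    rng = np.random.default_rng(seed)
    radii = rng.random((reps, l)) ** (1.0 / n)
    return float(np.median(radii.min(axis=1)))

for n in (2, 10, 100):
    print(n, median_closed_form(n, 20), median_simulated(n, 20))
```

Already at $n=100$ the median is above 0.96: even the nearest of 20 sample points typically sits almost on the boundary of the ball.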
|
Need more intuition for the curse of dimensionality [duplicate]
|
I am used to an essentially identical, but in my opinion more illustrative, example.
Let $x_1,...x_l$ be i.i.d. and uniformly distributed in the unit $n$-ball centered at the origin. Then it can be sh
|
Need more intuition for the curse of dimensionality [duplicate]
I am used to an essentially identical, but in my opinion more illustrative, example.
Let $x_1,...x_l$ be i.i.d. and uniformly distributed in the unit $n$-ball centered at the origin. Then it can be shown (a derivation is given at the end of this answer) that the median of the minimum of the Euclidean distances of these points from the origin, $m=\text{med}\min_l(\rho(x_1,0),...,\rho(x_l,0))$, is
$$
m=\left[1-2^{-1/l}\right]^{1/n}
$$
Obviously, $m\to_{n\to\infty}1$.
Now, for some intuition about the curse of dimensionality, imagine that we want to classify the point at the origin using a $kNN$ classifier (for simplicity even with $k=1$). What this formula gives us is that when the dimensionality of the feature space becomes large enough, typically even the nearest point of our training sample will "almost surely" (not exactly in the measure-theoretic sense) lie almost on the boundary of our unit ball; thus all sample points will have almost the same Euclidean distance from our point, rendering comparisons of distances to the point of interest effectively useless.
This is how I like to think about the catchphrase "In a high-dimensional space, almost all points are almost equally as distant from each other". Hope this intuition satisfies you.
EDIT
Proof of the formula:
1) Let $r(x)=\rho(x, 0)$. Then the distribution function of $r$ is given by
$$
F_r(t)=P(\rho(x, 0)<t)=\frac{V_n(t)}{V_n(1)}=t^n,
$$
where $V_n(t)$ is the volume of an $n$-dimensional ball of radius $t$.
2) Let $M(X)=\min(r(x_1),...,r(x_l))$. Since $M\geq t$ exactly when every $r(x_i)\geq t$, the distribution of $M$ is
$$
F_M(t)=P(M<t)=1-P(M\geq{t})=1-(1-F_r(t))^l=1-(1-t^n)^l.
$$
3) Now, the definition of $m$ is $F_M(m)=1/2$. Simple arithmetic now gives the claim.
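To make this concrete, here is a quick Monte Carlo check of the formula (a Python sketch of my own; the only fact used is that the radius of a uniform point in the unit $n$-ball has CDF $F_r(t)=t^n$, as derived in step 1). It estimates the median of $M(X)$, the smallest of the $l$ radii whose CDF $1-(1-t^n)^l$ appears in step 2, and compares it with the closed form:

```python
import random
import statistics

def min_radius(n, l, rng):
    # Radius of a uniform point in the unit n-ball has CDF F_r(t) = t**n,
    # so a draw is obtained by inverse transform sampling as U**(1/n).
    return min(rng.random() ** (1.0 / n) for _ in range(l))

def simulated_median(n, l, trials=20000, seed=0):
    # Empirical median of M(X), the smallest of the l radii, over many trials.
    rng = random.Random(seed)
    return statistics.median(min_radius(n, l, rng) for _ in range(trials))

def theoretical_median(n, l):
    # The claimed closed form: m = (1 - 2**(-1/l))**(1/n).
    return (1.0 - 2.0 ** (-1.0 / l)) ** (1.0 / n)

if __name__ == "__main__":
    # Already for n = 20 dimensions and l = 10 sample points, m is close to 1.
    print(simulated_median(20, 10), theoretical_median(20, 10))
```

Already at $n=20$ and $l=10$ both numbers come out near $0.87$, and increasing $n$ pushes them toward 1.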
|
Need more intuition for the curse of dimensionality [duplicate]
I am used to an essentially same but a bit more illustrative example, in my opinion.
Let $x_1,...x_l$ be i.i.d. and uniformly distributed in the unit $n$-ball centered at the origin. Then it can be sh
|
47,879
|
Need more intuition for the curse of dimensionality [duplicate]
|
There is one respect in which the Euclidean distance is not comfortable because the distance tends to increase with dimension: comparison of distances between two pairs of points when the dimension of the first pair is different than that of the second pair.
Suppose there are two points $x$ and $y$ in $\mathbb{R}^n$ and you want to calculate the distance between them. Suppose that in the beginning only the first coordinate is revealed to you, and the observed distance is $d_1=\sqrt{(x_1-y_1)^2}$. After that, another coordinate is revealed, and the observed distance becomes $d_2=\sqrt{(x_1-y_1)^2+(x_2-y_2)^2}$. Necessarily $d_2\geq d_1$ (with strict inequality unless $x_2=y_2$), even though the two points $x$ and $y$ are the same in both cases. That means you have trouble comparing distances across different dimensions (but you can still meaningfully compare distances between different points when the dimension is fixed).
Taking, for example, the mean of coordinate-by-coordinate distance ($d=\frac{1}{n}\sum_{i=1}^n |x_i-y_i|$) could be a remedy.
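A small illustration of both points (a Python sketch; the variable names are mine): as coordinates of two fixed random points are revealed one at a time, the Euclidean distance can only grow, while the mean coordinate-by-coordinate distance stays on a fixed $[0,1]$ scale:

```python
import math
import random

def euclid(x, y):
    # Euclidean distance between points given as coordinate lists.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def mean_abs(x, y):
    # Mean coordinate-by-coordinate distance, d = (1/n) * sum |x_i - y_i|.
    return sum(abs(a - b) for a, b in zip(x, y)) / len(x)

rng = random.Random(1)
x = [rng.random() for _ in range(100)]
y = [rng.random() for _ in range(100)]

# Distances computed as the first k coordinates are revealed, k = 1..100.
prefix_euclid = [euclid(x[:k], y[:k]) for k in range(1, 101)]
prefix_mean = [mean_abs(x[:k], y[:k]) for k in range(1, 101)]
```

The `prefix_euclid` sequence is nondecreasing in the number of revealed coordinates, while `prefix_mean` remains bounded between 0 and 1 regardless of dimension.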
|
Need more intuition for the curse of dimensionality [duplicate]
|
There is one respect in which the Euclidean distance is not comfortable because the distance tends to increase with dimension: comparison of distances between two pairs of points when the dimension of
|
Need more intuition for the curse of dimensionality [duplicate]
There is one respect in which the Euclidean distance is not comfortable because the distance tends to increase with dimension: comparison of distances between two pairs of points when the dimension of the first pair is different than that of the second pair.
Suppose there are two points $x$ and $y$ in $\mathbb{R}^n$ and you want to calculate the distance between them. Suppose that in the beginning only the first coordinate is revealed to you, and the observed distance is $d_1=\sqrt{(x_1-y_1)^2}$. After that, another coordinate is revealed, and the observed distance becomes $d_2=\sqrt{(x_1-y_1)^2+(x_2-y_2)^2}$. Necessarily $d_2\geq d_1$ (with strict inequality unless $x_2=y_2$), even though the two points $x$ and $y$ are the same in both cases. That means you have trouble comparing distances across different dimensions (but you can still meaningfully compare distances between different points when the dimension is fixed).
Taking, for example, the mean of coordinate-by-coordinate distance ($d=\frac{1}{n}\sum_{i=1}^n |x_i-y_i|$) could be a remedy.
|
Need more intuition for the curse of dimensionality [duplicate]
There is one respect in which the Euclidean distance is not comfortable because the distance tends to increase with dimension: comparison of distances between two pairs of points when the dimension of
|
47,880
|
Feature extraction for time series classification
|
From my experience, often the mass calculation of different features with subsequent inspection of their significance can lead to interesting insights.
You could use the Python package tsfresh to automatically extract a huge number of features and filter them by their importance.
You described calculating frequency-domain, skewness, and kurtosis features. Those are also contained in tsfresh. But there is a huge number of other time series characteristics that can also be used as potential features for audio classification. There are simple features such as the mean, time-series-related features such as the coefficients of an AR model, or highly sophisticated features such as the test statistic of the augmented Dickey-Fuller hypothesis test.
So regarding your question: You can find inspiration about other features in the documentation about the calculated features of tsfresh here.
Disclaimer: I am one of the authors of tsfresh.
|
Feature extraction for time series classification
|
From my experience, often the mass calculation of different features with subsequent inspection of their significance can lead to interesting insights.
You could use the python package tsfresh to auto
|
Feature extraction for time series classification
From my experience, often the mass calculation of different features with subsequent inspection of their significance can lead to interesting insights.
You could use the Python package tsfresh to automatically extract a huge number of features and filter them by their importance.
You described calculating frequency-domain, skewness, and kurtosis features. Those are also contained in tsfresh. But there is a huge number of other time series characteristics that can also be used as potential features for audio classification. There are simple features such as the mean, time-series-related features such as the coefficients of an AR model, or highly sophisticated features such as the test statistic of the augmented Dickey-Fuller hypothesis test.
So regarding your question: You can find inspiration about other features in the documentation about the calculated features of tsfresh here.
Disclaimer: I am one of the authors of tsfresh.
|
Feature extraction for time series classification
From my experience, often the mass calculation of different features with subsequent inspection of their significance can lead to interesting insights.
You could use the python package tsfresh to auto
|
47,881
|
slice sampling within a Gibbs sampler
|
I found two references. This one details the algorithm, but the publicly-available pages that I could see on Google Books don't prove that it works.
@inbook{cruz,
Author = {Cruz, Marcelo G. and Peters, Gareth W. and Shevchenko, Pavel V.},
Chapter = {7.6.2: Generic univariate auxiliary variable Gibbs sampler: slice sampler},
Publisher = {Wiley},
Title = {Fundamental Aspects of Operational Risk and Insurance Analytics: A Handbook of Operational Risk},
Year = {2015}}
Another one, also partially available on Google Books for free, seems to allude to slice-sampling-within-Gibbs.
@inbook{banerjee,
Author = {Banerjee, Sudipto and Carlin, Bradley P. and Gelfand, Alan E. },
Chapter = {9.4.1: Regression in the Gaussian case},
Edition = {2nd},
Publisher = {CRC Press},
Title = {Hierarchical Modeling and Analysis for Spatial Data},
Year = {2015}}
I agree, it would be nice to find solid proof of validity, preferably in a good journal.
EDIT: Even Gelman's famous "Bayesian Data Analysis" (3rd ed) mentions the idea. In Section 12.3: Further extensions to Gibbs and Metropolis, under the "Slice sampling" heading, the end of the first paragraph says
Slice sampling refers to the application of iterative simulation
algorithms on this uniform distribution. The details of implementing
an effective slice sampling procedure can be complicated, but the
method can be applied in great generality and can be especially useful
for sampling one-dimensional conditional distributions in a Gibbs
sampling structure.
Neal's famous 2003 slice sampling paper is where I think it was first suggested. The first paragraph of Section 4 says
Slice sampling is simplest when only one (real-valued) variable is
being updated. This will of course be the case when the distribution
of interest is univariate, but more typically, the single-variable
slice sampling methods of this section will be used to sample from a
multivariate distribution for $x = (x_1,\ldots,x_n)$ by sampling repeatedly
for each variable in turn. To update $x_i$, we must be able to compute a
function, $f_i(x_i)$, that is proportional to $p(x_i \mid \{x_j\}_{j\neq i})$, where
$\{x_j\}_{j\neq i}$ are the values of the other variables.
Yet I still can find no proof of correctness.
|
slice sampling within a Gibbs sampler
|
I found two references. This one details the algorithm, but the publicly-available pages that I could see on Google Books don't prove that it works.
@inbook{cruz,
Author = {Cruz, Marcelo G. and Pe
|
slice sampling within a Gibbs sampler
I found two references. This one details the algorithm, but the publicly-available pages that I could see on Google Books don't prove that it works.
@inbook{cruz,
Author = {Cruz, Marcelo G. and Peters, Gareth W. and Shevchenko, Pavel V.},
Chapter = {7.6.2: Generic univariate auxiliary variable Gibbs sampler: slice sampler},
Publisher = {Wiley},
Title = {Fundamental Aspects of Operational Risk and Insurance Analytics: A Handbook of Operational Risk},
Year = {2015}}
Another one, also partially available on Google Books for free, seems to allude to slice-sampling-within-Gibbs.
@inbook{banerjee,
Author = {Banerjee, Sudipto and Carlin, Bradley P. and Gelfand, Alan E. },
Chapter = {9.4.1: Regression in the Gaussian case},
Edition = {2nd},
Publisher = {CRC Press},
Title = {Hierarchical Modeling and Analysis for Spatial Data},
Year = {2015}}
I agree, it would be nice to find solid proof of validity, preferably in a good journal.
EDIT: Even Gelman's famous "Bayesian Data Analysis" (3rd ed) mentions the idea. In Section 12.3: Further extensions to Gibbs and Metropolis, under the "Slice sampling" heading, the end of the first paragraph says
Slice sampling refers to the application of iterative simulation
algorithms on this uniform distribution. The details of implementing
an effective slice sampling procedure can be complicated, but the
method can be applied in great generality and can be especially useful
for sampling one-dimensional conditional distributions in a Gibbs
sampling structure.
Neal's famous 2003 slice sampling paper is where I think it was first suggested. The first paragraph of Section 4 says
Slice sampling is simplest when only one (real-valued) variable is
being updated. This will of course be the case when the distribution
of interest is univariate, but more typically, the single-variable
slice sampling methods of this section will be used to sample from a
multivariate distribution for $x = (x_1,\ldots,x_n)$ by sampling repeatedly
for each variable in turn. To update $x_i$, we must be able to compute a
function, $f_i(x_i)$, that is proportional to $p(x_i \mid \{x_j\}_{j\neq i})$, where
$\{x_j\}_{j\neq i}$ are the values of the other variables.
Yet I still can find no proof of correctness.
|
slice sampling within a Gibbs sampler
I found two references. This one details the algorithm, but the publicly-available pages that I could see on Google Books don't prove that it works.
@inbook{cruz,
Author = {Cruz, Marcelo G. and Pe
|
47,882
|
slice sampling within a Gibbs sampler
|
It is generally the case that any valid MCMC scheme for univariate distributions can be applied to a univariate conditional distribution as part of an MCMC scheme for sampling from a multivariate distribution.
This fact is used extensively all over the literature on MCMC. Its proof is straightforward. There is no need to prove anything special in this regard for slice sampling in particular.
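To make this concrete, here is a minimal sketch (in Python, not taken from any of the references; the function names are mine) of slice-sampling-within-Gibbs for a standard bivariate normal with correlation $\rho$. Each coordinate's full conditional is $N(\rho\, x_{\text{other}},\, 1-\rho^2)$, and each is updated with Neal's stepping-out/shrinkage slice sampler applied to the unnormalized log-density:

```python
import math
import random

def slice_sample(x0, logf, w=1.0):
    # One univariate slice-sampling update (stepping out + shrinkage, Neal 2003).
    logy = logf(x0) + math.log(random.random())  # log-height of the auxiliary slice
    # Step out: place an interval of width w around x0 and grow it until
    # both endpoints fall below the slice level.
    left = x0 - w * random.random()
    right = left + w
    while logf(left) > logy:
        left -= w
    while logf(right) > logy:
        right += w
    # Shrink: sample uniformly from the interval, shrinking it toward x0
    # whenever the proposal falls outside the slice.
    while True:
        x1 = random.uniform(left, right)
        if logf(x1) >= logy:
            return x1
        if x1 < x0:
            left = x1
        else:
            right = x1

def gibbs_bivariate_normal(rho=0.8, n_samples=5000, burn_in=500, seed=42):
    # Gibbs sampler for a standard bivariate normal with correlation rho;
    # each full conditional is N(rho * other, 1 - rho**2), updated by
    # slice sampling on its unnormalized log-density.
    random.seed(seed)
    s2 = 1.0 - rho ** 2
    x1 = x2 = 0.0
    draws = []
    for i in range(n_samples + burn_in):
        x1 = slice_sample(x1, lambda x: -(x - rho * x2) ** 2 / (2.0 * s2))
        x2 = slice_sample(x2, lambda x: -(x - rho * x1) ** 2 / (2.0 * s2))
        if i >= burn_in:
            draws.append((x1, x2))
    return draws
```

Empirically the draws reproduce the target's moments (means near 0, variances near 1, correlation near $\rho$), which is the practical face of the claim above.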
|
slice sampling within a Gibbs sampler
|
It is generally the case that any valid MCMC scheme for univariate distributions can be applied to a univariate conditional distribution as part of an MCMC scheme for sampling from a multivariate dist
|
slice sampling within a Gibbs sampler
It is generally the case that any valid MCMC scheme for univariate distributions can be applied to a univariate conditional distribution as part of an MCMC scheme for sampling from a multivariate distribution.
This fact is used extensively all over the literature on MCMC. Its proof is straightforward. There is no need to prove anything special in this regard for slice sampling in particular.
|
slice sampling within a Gibbs sampler
It is generally the case that any valid MCMC scheme for univariate distributions can be applied to a univariate conditional distribution as part of an MCMC scheme for sampling from a multivariate dist
|
47,883
|
Clarification about no free lunch theorem
|
As I understand the NFL theorem, the only way a model can out-perform the general model is by using predefined knowledge/structure relevant to the problem. These prior assumptions will cause the specialized model to perform worse, on average, on other subsets that aren't its specialty.
This is not entirely accurate, but just to use your example:
A model for classifying Arabic documents can perform better than the general language classification model, but it will have worse performance on English, French, Spanish, Hebrew, etc. than the general model.
|
Clarification about no free lunch theorem
|
As I understand the NFL theorem, the only way a model can out-perform the general model, is by using predefined knowledge / structure, relevant to the problem. These prior assumptions, will cause the
|
Clarification about no free lunch theorem
As I understand the NFL theorem, the only way a model can out-perform the general model is by using predefined knowledge/structure relevant to the problem. These prior assumptions will cause the specialized model to perform worse, on average, on other subsets that aren't its specialty.
This is not entirely accurate, but just to use your example:
A model for classifying Arabic documents can perform better than the general language classification model, but it will have worse performance on English, French, Spanish, Hebrew, etc. than the general model.
|
Clarification about no free lunch theorem
As I understand the NFL theorem, the only way a model can out-perform the general model, is by using predefined knowledge / structure, relevant to the problem. These prior assumptions, will cause the
|
47,884
|
Using re.form= in predict.merMod() for a lmer() model
|
The short answer is that dropping random effects from predictions does not re-estimate the reduced model, it just sets the other random effects to 0, so it is still "fully conditional".
In the first model, for which you controlled for days and random slopes and intercepts, each individual has three contributions to their predicted value when interpolating data: the fixed effects plus the two random effects (random intercept and random slope). For observation 3, subject 308, you find reaction = 250.8006 and days = 2. fm1 generates the fixed effects
> fixef(fm1)
(Intercept) Days
251.40510 10.46729
So the response form
> predict(fm1, re.form=NA)[3]
3
272.3397
is equal to
> 251.40510 + 10.46729 *2
[1] 272.3397
This is the prediction you get when you don't use the random effects, even though you estimated them.
The random effects are:
> ranef(fm1)
$Subject
(Intercept) Days
308 2.2585649 9.1989720
309 -40.3985805 -8.6197025
310 -38.9602496 -5.4488793
330 23.6905015 -4.8143318
331 22.2602054 -3.0698950
332 9.0395269 -0.2721708
333 16.8404330 -0.2236247
334 -7.2325801 1.0745762
335 -0.3336950 -10.7521592
337 34.8903539 8.6282834
349 -25.2101132 1.1734147
350 -13.0699587 6.6142053
351 4.5778359 -3.0152573
352 20.8635944 3.5360130
369 3.2754533 0.8722165
370 -25.6128726 4.8224651
371 0.8070399 -0.9881551
372 12.3145406 1.2840295
So adding 2.2585649 to this prediction is just using the intercept but not the slope, and it gives you:
> 251.40510 + 10.46729 *2 + 2.2585649
[1] 274.5982
Which is the same as:
> predict(fm1, re.form=(~1|Subject))[3]
3
274.5982
Now the trick to understanding all this is realizing that specifying more complex random effects will fundamentally change the lower level effects you're estimating:
A simple comparison of
fixef(fm1) versus fixef(fm2), and ranef(fm1) versus ranef(fm2), will show you this.
|
Using re.form= in predict.merMod() for a lmer() model
|
The short answer is that dropping random effects from predictions does not re-estimate the reduced model, it just sets the other random effects to 0, so it is still "fully conditional".
In the first m
|
Using re.form= in predict.merMod() for a lmer() model
The short answer is that dropping random effects from predictions does not re-estimate the reduced model, it just sets the other random effects to 0, so it is still "fully conditional".
In the first model, for which you controlled for days and random slopes and intercepts, each individual has three contributions to their predicted value when interpolating data: the fixed effects plus the two random effects (random intercept and random slope). For observation 3, subject 308, you find reaction = 250.8006 and days = 2. fm1 generates the fixed effects
> fixef(fm1)
(Intercept) Days
251.40510 10.46729
So the response form
> predict(fm1, re.form=NA)[3]
3
272.3397
is equal to
> 251.40510 + 10.46729 *2
[1] 272.3397
This is the prediction you get when you don't use the random effects, even though you estimated them.
The random effects are:
> ranef(fm1)
$Subject
(Intercept) Days
308 2.2585649 9.1989720
309 -40.3985805 -8.6197025
310 -38.9602496 -5.4488793
330 23.6905015 -4.8143318
331 22.2602054 -3.0698950
332 9.0395269 -0.2721708
333 16.8404330 -0.2236247
334 -7.2325801 1.0745762
335 -0.3336950 -10.7521592
337 34.8903539 8.6282834
349 -25.2101132 1.1734147
350 -13.0699587 6.6142053
351 4.5778359 -3.0152573
352 20.8635944 3.5360130
369 3.2754533 0.8722165
370 -25.6128726 4.8224651
371 0.8070399 -0.9881551
372 12.3145406 1.2840295
So adding 2.2585649 to this prediction is just using the intercept but not the slope, and it gives you:
> 251.40510 + 10.46729 *2 + 2.2585649
[1] 274.5982
Which is the same as:
> predict(fm1, re.form=(~1|Subject))[3]
3
274.5982
Now the trick to understanding all this is realizing that specifying more complex random effects will fundamentally change the lower level effects you're estimating:
A simple comparison of
fixef(fm1) versus fixef(fm2), and ranef(fm1) versus ranef(fm2), will show you this.
|
Using re.form= in predict.merMod() for a lmer() model
The short answer is that dropping random effects from predictions does not re-estimate the reduced model, it just sets the other random effects to 0, so it is still "fully conditional".
In the first m
|
47,885
|
EFA on one part of the dataset and CFA/SEM on another part of the dataset
|
I believe you should do the structural equation modeling on the second half of the dataset.
As you say in your question, the basic process is: You split the dataset, and the first half you do the EFA on. This is where you explore the data and get a feel for how the structure shapes up. But who knows if this is just due to artifacts of the data you had? So you move on to the second half of the dataset. This is where you specify the structure you got from the EFA and see if it fits these other data well (i.e., doing a CFA).
Now, this is what most people do, because they are interested in investigating the psychometric properties of a scale (factor). I always see this in papers on scale validation.
But you are also interested in relationships between the factor and other variables. I think you should do structural equation modeling in the second half of the data; CFA is really just a part of fitting a SEM.
In any SEM, you are first going to specify a measurement model (i.e., do a CFA) to make sure that the latent variables fit before going on to modeling relationships between them. It's like building from the ground-up: First, let's make sure that our latent variables are solid building blocks before trying to build a model out of those blocks. So, in a sense, if you do the SEM, you are really doing a CFA at the same time — instead, now just a little bit more.
So with the second half of the dataset, I would write it up as such:
Specify and run the entire model.
When writing it up, first focus on the measurement model of the factor that you are interested in and did the EFA on with the first half of the dataset. You could even report fit statistics for just this part of the model.
Then, talk about the structural relationships between this latent variable and other variables.
Basically, you can do an EFA in the first half and a SEM in the second half, but give special attention to the measurement model part (i.e., the CFA) for the factor of interest, because doing this CFA is part of the structural equation model itself. There's no need to think of it as EFA $\rightarrow$ CFA $\rightarrow$ SEM. Think of CFA as part of specifying a SEM.
|
EFA on one part of the dataset and CFA/SEM on another part of the dataset
|
I believe you should do the structural equation modeling on the second half of the dataset.
As you say in your question, the basic process is: You split the dataset, and the first half you do the EFA
|
EFA on one part of the dataset and CFA/SEM on another part of the dataset
I believe you should do the structural equation modeling on the second half of the dataset.
As you say in your question, the basic process is: You split the dataset, and the first half you do the EFA on. This is where you explore the data and get a feel for how the structure shapes up. But who knows if this is just due to artifacts of the data you had? So you move on to the second half of the dataset. This is where you specify the structure you got from the EFA and see if it fits these other data well (i.e., doing a CFA).
Now, this is what most people do, because they are interested in investigating the psychometric properties of a scale (factor). I always see this in papers on scale validation.
But you are also interested in relationships between the factor and other variables. I think you should do structural equation modeling in the second half of the data; CFA is really just a part of fitting a SEM.
In any SEM, you are first going to specify a measurement model (i.e., do a CFA) to make sure that the latent variables fit before going on to modeling relationships between them. It's like building from the ground-up: First, let's make sure that our latent variables are solid building blocks before trying to build a model out of those blocks. So, in a sense, if you do the SEM, you are really doing a CFA at the same time — instead, now just a little bit more.
So with the second half of the dataset, I would write it up as such:
Specify and run the entire model.
When writing it up, first focus on the measurement model of the factor that you are interested in and did the EFA on with the first half of the dataset. You could even report fit statistics for just this part of the model.
Then, talk about the structural relationships between this latent variable and other variables.
Basically, you can do an EFA in the first half and a SEM in the second half, but give special attention to the measurement model part (i.e., the CFA) for the factor of interest, because doing this CFA is part of the structural equation model itself. There's no need to think of it as EFA $\rightarrow$ CFA $\rightarrow$ SEM. Think of CFA as part of specifying a SEM.
|
EFA on one part of the dataset and CFA/SEM on another part of the dataset
I believe you should do the structural equation modeling on the second half of the dataset.
As you say in your question, the basic process is: You split the dataset, and the first half you do the EFA
|
47,886
|
Maximize variance of a distribution subject to constraints
|
Let $f(x) = (x - \mu)^2$. Since $f$ is convex, we have
$$
f(x)
= f\bigl( (1-x)\cdot 0 + x\cdot 1 \bigr)
\leq (1-x) f(0) + x f(1)
$$
for all $x\in[0,1]$ and thus we get the bound
$$\begin{align*}
\mathrm{Var}(X)
&= \mathbb{E}\bigl(f(X)\bigr) \\
&\leq \mathbb{E}(1-X) f(0) + \mathbb{E}(X) f(1) \\
&= (1-\mu)\mu^2 + \mu (1-\mu)^2 \\
&= \mu(1-\mu)
\end{align*}
$$
for all random variables $X\in[0,1]$ with $\mathbb{E}(X) = \mu$.
For the two-point random variable $X$ from the question, with $P(X=0)=1-\mu$ and $P(X=1) = \mu$ we have $\mathrm{Var}(X) = \mathbb{E}(X^2) - \mu^2 = (1-\mu)0^2 + \mu 1^2 - \mu^2 = \mu(1-\mu)$. Thus, the bound is sharp, and the two-point distribution indeed maximises the variance.
[I suspect that the above is what cardinal's cryptic comments on the question hint at.]
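A quick numerical sanity check of the bound (a Python sketch; the randomly generated discrete distributions are my own construction, not part of the question):

```python
import random

def mean_var(points, probs):
    # Mean and variance of a discrete distribution on the given support points.
    mu = sum(p * x for x, p in zip(points, probs))
    var = sum(p * (x - mu) ** 2 for x, p in zip(points, probs))
    return mu, var

def bound_holds(trials=1000, seed=0):
    # Draw random discrete distributions on [0, 1] and check Var <= mu(1 - mu).
    rng = random.Random(seed)
    for _ in range(trials):
        k = rng.randint(2, 6)
        points = [rng.random() for _ in range(k)]
        weights = [rng.random() for _ in range(k)]
        total = sum(weights)
        probs = [w / total for w in weights]
        mu, var = mean_var(points, probs)
        if var > mu * (1.0 - mu) + 1e-12:
            return False
    return True
```

The two-point distribution with $P(X=0)=1-\mu$, $P(X=1)=\mu$ attains the bound exactly, e.g. `mean_var([0.0, 1.0], [0.7, 0.3])` gives mean $0.3$ and variance $0.21 = 0.3 \cdot 0.7$.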
|
Maximize variance of a distribution subject to constraints
|
Let $f(x) = (x - \mu)^2$. Since $f$ is convex, we have
$$
f(x)
= f\bigl( (1-x)\cdot 0 + x\cdot 1 \bigr)
\leq (1-x) f(0) + x f(1)
$$
for all $x\in[0,1]$ and thus we get the bound
$$\begin{align*}
\mat
|
Maximize variance of a distribution subject to constraints
Let $f(x) = (x - \mu)^2$. Since $f$ is convex, we have
$$
f(x)
= f\bigl( (1-x)\cdot 0 + x\cdot 1 \bigr)
\leq (1-x) f(0) + x f(1)
$$
for all $x\in[0,1]$ and thus we get the bound
$$\begin{align*}
\mathrm{Var}(X)
&= \mathbb{E}\bigl(f(X)\bigr) \\
&\leq \mathbb{E}(1-X) f(0) + \mathbb{E}(X) f(1) \\
&= (1-\mu)\mu^2 + \mu (1-\mu)^2 \\
&= \mu(1-\mu)
\end{align*}
$$
for all random variables $X\in[0,1]$ with $\mathbb{E}(X) = \mu$.
For the two-point random variable $X$ from the question, with $P(X=0)=1-\mu$ and $P(X=1) = \mu$ we have $\mathrm{Var}(X) = \mathbb{E}(X^2) - \mu^2 = (1-\mu)0^2 + \mu 1^2 - \mu^2 = \mu(1-\mu)$. Thus, the bound is sharp, and the two-point distribution indeed maximises the variance.
[I suspect that the above is what cardinal's cryptic comments on the question hint at.]
|
Maximize variance of a distribution subject to constraints
Let $f(x) = (x - \mu)^2$. Since $f$ is convex, we have
$$
f(x)
= f\bigl( (1-x)\cdot 0 + x\cdot 1 \bigr)
\leq (1-x) f(0) + x f(1)
$$
for all $x\in[0,1]$ and thus we get the bound
$$\begin{align*}
\mat
|
47,887
|
Maximize variance of a distribution subject to constraints
|
I think I can develop a partial answer for a three-point distribution. Suppose I have ${\rm Prob}[X=0]=p_0, {\rm Prob}[X=1]=p_1$ and ${\rm Prob}[X=a]=p$ for some fixed $a,p\in(0,1)$. Then
$$
\mathbb{E}[X] = ap + p_1 = \mu,
$$
so that $p_1=\mu-ap, p_0=1-p-\mu+ap$ (some reasonable conditions must be applied so that the solutions are proper, $0\le p_0, p_1\le 1$; I will not bother and assume these conditions to be satisfied). Then
$$
\mathbb{V}[X]=a^2p + p_1 - \mu^2 = a^2p + \mu-ap -\mu^2=(a^2-a)p+\mu(1-\mu).
$$
Considering this now as a function of $p$, we see that $a^2-a<0$ for $a\in(0,1)$, so $\mathbb{V}[X]$ increases as $p$ decreases, and hence is maximized at the boundary value $p=0$ (i.e., a two-point distribution).
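A numeric check of this algebra (a Python sketch of mine, assuming parameter values for which $p_0, p_1 \in [0,1]$):

```python
def three_point_variance(a, p, mu):
    # X takes value a with prob p, 1 with prob p1, 0 with prob p0,
    # where p1 and p0 are chosen so that E[X] = mu.
    p1 = mu - a * p
    p0 = 1.0 - p - p1
    assert 0.0 <= p0 <= 1.0 and 0.0 <= p1 <= 1.0, "improper distribution"
    ex2 = a * a * p + p1  # E[X^2] = a^2 * p + 1^2 * p1 + 0^2 * p0
    return ex2 - mu * mu

def closed_form(a, p, mu):
    # The derived expression (a^2 - a) * p + mu * (1 - mu).
    return (a * a - a) * p + mu * (1.0 - mu)
```

Both functions agree, the variance increases as `p` decreases, and at `p = 0` it reaches the two-point maximum $\mu(1-\mu)$.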
|
Maximize variance of a distribution subject to constraints
|
I think I can develop a partial answer for a three-point distribution. Suppose I have ${\rm Prob}[X=0]=p_0, {\rm Prob}[X=1]=p_1$ and ${\rm Prob}[X=a]=p$ for some fixed $a,p\in(0,1)$. Then
$$
\mathbb{E
|
Maximize variance of a distribution subject to constraints
I think I can develop a partial answer for a three-point distribution. Suppose I have ${\rm Prob}[X=0]=p_0, {\rm Prob}[X=1]=p_1$ and ${\rm Prob}[X=a]=p$ for some fixed $a,p\in(0,1)$. Then
$$
\mathbb{E}[X] = ap + p_1 = \mu,
$$
so that $p_1=\mu-ap, p_0=1-p-\mu+ap$ (some reasonable conditions must be applied so that the solutions are proper, $0\le p_0, p_1\le 1$; I will not bother and assume these conditions to be satisfied). Then
$$
\mathbb{V}[X]=a^2p + p_1 - \mu^2 = a^2p + \mu-ap -\mu^2=(a^2-a)p+\mu(1-\mu).
$$
Considering this now as a function of $p$, we see that $a^2-a<0$ for $a\in(0,1)$, so $\mathbb{V}[X]$ increases as $p$ decreases, and hence is maximized at the boundary value $p=0$ (i.e., a two-point distribution).
|
Maximize variance of a distribution subject to constraints
I think I can develop a partial answer for a three-point distribution. Suppose I have ${\rm Prob}[X=0]=p_0, {\rm Prob}[X=1]=p_1$ and ${\rm Prob}[X=a]=p$ for some fixed $a,p\in(0,1)$. Then
$$
\mathbb{E
|
47,888
|
In which cases we can approximate expected value of a function by assuming the function and the expectation commute?
|
I will use $E$ for expectation, rather than angle brackets.
First of all, $E(f(X))$ can always be "approximated" by $f(E(X))$; the only question is the accuracy and adequacy for purpose of that approximation, which can be very context-specific.
If $f$ is linear (or more generally, affine), $E(f(X)) = f(E(X))$, and so the order of evaluating the function $f$ and expectation can be interchanged without introducing any error. To the extent that $f$ is "almost" linear, then $E(f(X))$ may be almost equal to $f(E(X))$; ultimately it comes down to quantifying this in a given case.
If $X$ takes on a specific constant value with probability 1, then $E(f(X)) = f(E(X))$, regardless of $f$. To the extent that $X$ has a distribution very close to being equal to a specific constant with probability 1, then $E(f(X))$ may be almost equal to $f(E(X))$; ultimately it comes down to quantifying this in a given case.
Depending on the function $f$ and the probability distribution $X$, there could be other combinations which also allow interchange of order without introducing any error. Here is a simple contrived family of examples. Let $f(x)$ = some function of $x$ for $x > 0$, $f(0) = 0$, and $f(x)= -f(-x)$ for $x < 0$. Let $X$ be a random variable which is symmetric about 0, and assume that $E(f(X))$ exists. Then $E(f(X)) = f(E(X))$, which happens to equal zero.
If $f$ is convex or concave, then Jensen's inequality https://en.wikipedia.org/wiki/Jensen's_inequality can be used to provide a one-sided bound on the error in interchanging expectation and a nonlinear function. Specifically, if $f$ is convex, then $f(E(X)) \le E(f(X))$. If $f$ is concave, then $f(E(X)) \ge E(f(X))$.
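A small numeric illustration of these cases (a Python sketch with an arbitrary discrete distribution of my choosing):

```python
def expect(f, points, probs):
    # E[f(X)] for a discrete random variable X.
    return sum(p * f(x) for x, p in zip(points, probs))

points = [0.0, 1.0, 2.0, 5.0]
probs = [0.1, 0.4, 0.3, 0.2]

mu = expect(lambda x: x, points, probs)  # E(X)

# Convex f: f(E(X)) <= E(f(X)) by Jensen's inequality, so this gap is >= 0.
convex_gap = expect(lambda x: x * x, points, probs) - mu * mu
# Affine f: the interchange is exact, so the gap is zero.
affine_gap = expect(lambda x: 3.0 * x + 2.0, points, probs) - (3.0 * mu + 2.0)
# Concave f: the inequality flips, so this gap is <= 0.
concave_gap = expect(lambda x: -x * x, points, probs) - (-(mu * mu))
```

For this distribution $E(X)=2$, the convex gap is strictly positive, the affine gap vanishes, and the concave gap is strictly negative, matching the three cases above.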
|
In which cases we can approximate expected value of a function by assuming the function and the expe
|
I will use $E$ for expectation, rather than angle brackets.
First of all, $E(f(X))$ can always be "approximated" by $f(E(X))$; the only question is the accuracy and adequacy for purpose of that approx
|
In which cases we can approximate expected value of a function by assuming the function and the expectation commute?
I will use $E$ for expectation, rather than angle brackets.
First of all, $E(f(X))$ can always be "approximated" by $f(E(X))$; the only question is the accuracy and adequacy for purpose of that approximation, which can be very context-specific.
If $f$ is linear (or more generally, affine), $E(f(X)) = f(E(X))$, and so the order of evaluating the function $f$ and expectation can be interchanged without introducing any error. To the extent that $f$ is "almost" linear, then $E(f(X))$ may be almost equal to $f(E(X))$; ultimately it comes down to quantifying this in a given case.
If $X$ takes on a specific constant value with probability 1, then $E(f(X)) = f(E(X))$, regardless of $f$. To the extent that $X$ has a distribution very close to being equal to a specific constant with probability 1, then $E(f(X))$ may be almost equal to $f(E(X))$; ultimately it comes down to quantifying this in a given case.
Depending on the function $f$ and the probability distribution $X$, there could be other combinations which also allow interchange of order without introducing any error. Here is a simple contrived family of examples. Let $f(x)$ = some function of $x$ for $x > 0$, $f(0) = 0$, and $f(x)= -f(-x)$ for $x < 0$. Let $X$ be a random variable which is symmetric about 0, and assume that $E(f(X))$ exists. Then $E(f(X)) = f(E(X))$, which happens to equal zero.
If $f$ is convex or concave, then Jensen's inequality https://en.wikipedia.org/wiki/Jensen's_inequality can be used to provide a one-sided bound on the error in interchanging expectation and a nonlinear function. Specifically, if $f$ is convex, then $f(E(X)) \le E(f(X))$. If $f$ is concave, then $f(E(X)) \ge E(f(X))$.
|
In which cases we can approximate expected value of a function by assuming the function and the expe
I will use $E$ for expectation, rather than angle brackets.
First of all, $E(f(X))$ can always be "approximated" by $f(E(X))$; the only question is the accuracy and adequacy for purpose of that approx
|
47,889
|
Convert double differenced forecast into actual value
|
I found the answer on Stack Overflow. To summarize: instead of doing
ARIMAfit <- auto.arima(diff(diff(val.ts)), approximation=FALSE,trace=FALSE, xreg=diff(diff(xreg)))
we should instead do
ARIMAfit <- auto.arima(val.ts, d=2, approximation=FALSE,trace=FALSE, xreg=xreg)
This d=2 will make sure that the forecast values are on the scale of the original (undifferenced) series.
So if I do forecast(ARIMAfit, h=300, xreg=testxreg), I will get the next 300 future values.
|
|
47,890
|
Use of fixed effects and random effects
|
Here is a standard linear panel data model:
$$
y_{it}=X_{it}\delta+\alpha_i+\eta_{it},
$$
the so-called error component model. Here, $\alpha_i$ is what is sometimes called individual-specific heterogeneity, the error component that is constant over time. The other error component $\eta_{it}$ is "idiosyncratic", varying both over units and over time.
A reason to use a random or fixed effects approach instead of pooled OLS is that the presence of $\alpha_i$ leads to an error covariance matrix that is not "spherical" (i.e., not a multiple of the identity matrix), so that a GLS-type approach like random effects will be more efficient than OLS.
If, however, the $\alpha_i$ correlate with the regressors $X_{it}$ - as will be the case in many typical applications - omitting these individual-specific intercepts will lead to omitted variable bias. Then, a fixed effect approach which effectively fits such intercepts will be more convincing.
The following figure aims to illustrate this point. The raw correlation between $y$ and $X$ is positive. But, the observations belonging to one unit (color) exhibit a negative relationship - this is what we would like to identify, because this is the reaction of $y_{it}$ to a change in $X_{it}$.
Also, there is correlation between the $\alpha_i$ and $X_{it}$: If the former are individual-specific intercepts (i.e., expected values for unit $i$ when $X_{it}=0$), we see that the intercept for, e.g., the lightblue panel unit is much smaller than that for the brown unit. At the same time, the lightblue panel unit has much smaller regressor values $X_{it}$.
So, random effects or pooled OLS would be the wrong strategy here, because it would result in a positive estimate of $\delta$, as these two estimators basically ignore the colors (RE only incorporates the colors in the estimate of the variance-covariance matrix).
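The point of the figure can be reproduced with a small simulation. Below is a sketch in Python/NumPy (hypothetical numbers: 50 units, 10 periods, true $\delta = -0.5$, and $\alpha_i$ constructed to correlate positively with $X_{it}$): pooled OLS, which ignores the unit structure, returns a positive slope, while the within (demeaned) fixed-effects estimator recovers a value close to the true negative $\delta$.

```python
import numpy as np

rng = np.random.default_rng(0)
n_units, T, delta = 50, 10, -0.5          # true delta is negative

alpha = rng.normal(0.0, 2.0, n_units)                     # unit-specific intercepts
x = alpha[:, None] + rng.normal(0.0, 0.5, (n_units, T))   # X correlates with alpha
y = delta * x + alpha[:, None] + rng.normal(0.0, 0.5, (n_units, T))

def slope(xv, yv):
    """OLS slope of yv on xv (with intercept)."""
    xc, yc = xv - xv.mean(), yv - yv.mean()
    return float((xc * yc).sum() / (xc * xc).sum())

pooled = slope(x.ravel(), y.ravel())       # ignores the unit structure ("colors")
xw = x - x.mean(axis=1, keepdims=True)     # within transformation: demean per unit
yw = y - y.mean(axis=1, keepdims=True)
fe = slope(xw.ravel(), yw.ravel())         # fixed-effects (within) estimate

print(pooled, fe)  # pooled comes out positive; fe is close to the true -0.5
```

Demeaning per unit wipes out $\alpha_i$ algebraically, which is exactly why the within estimator is immune to the correlation between $\alpha_i$ and $X_{it}$ that poisons the pooled estimate.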
|
|
47,891
|
How to tune the "depth" and "min_samples_leaf" of Random Forest with correlated data?
|
You are doing it wrong -- the essential part of RF is that it basically only requires making the number of trees large enough to converge, and that's it (this becomes obvious once one starts doing proper tuning, i.e. nested cross-validation to check how robust the selection of parameters really is). Also, 70-fold cross-validation on a 160-case dataset is very unusual -- the most common choices are 5 or 10 folds; with that many folds you could just as well have chosen a leave-one-out procedure. If the performance is bad it is better to fix the features or look for another method.
Pruning works nicely for single decision trees because it removes noise, but doing this within RF undermines bagging, which relies on fully grown trees to supply diverse, low-bias members for voting. Max depth is usually only a technical parameter to avoid recursion overflows, while min samples per leaf mainly smooths votes in regression -- the spirit of the method is that
Each tree is grown to the largest extent possible.
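The "just add more trees" logic can be illustrated with a toy calculation. Assume the ensemble members vote independently and each is correct with probability 0.7 (an idealization -- bagging only approximates independence, which is exactly why pruning that decorrelation away hurts). The accuracy of the majority vote is then a binomial tail that climbs toward 1 as trees are added, with no per-tree tuning at all:

```python
import math

def majority_accuracy(n_trees, p=0.7):
    """P(majority vote is correct) for n_trees independent voters,
    each correct with probability p (n_trees assumed odd)."""
    k_min = n_trees // 2 + 1
    return sum(math.comb(n_trees, k) * p**k * (1 - p)**(n_trees - k)
               for k in range(k_min, n_trees + 1))

for n in (1, 11, 101):
    print(n, round(majority_accuracy(n), 4))
# accuracy rises monotonically toward 1 as trees are added
```

Once the correlation between members is real rather than zero, the curve flattens at a lower ceiling, but the shape is the same: performance converges in the number of trees rather than peaking at some tuned value.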
|
|
47,892
|
GLMER sampling random effects
|
One of the ways of thinking about random effects (see also this answer) is that they apply to groups that are random draws from a population. So if you studied students' performance across different schools, you could treat schools either as a fixed effect, estimating a parameter for each school, or as a random effect, where you are interested in the overall influence of schools, described by a statistical distribution with its own mean and standard deviation, with individual schools being samples from this distribution. This means that if you are interested in estimating random effects for participants given some random sample from the population, then this agrees with the general way of thinking about what random effects describe.
The only issue here is how well your sample reflects the population of interest. Now, if you are sampling from a dataset then you have full control over the sampling process. Sampling cases randomly from your population should in most cases be enough for your sample to be representative, as long as the sample is big enough (how big is big enough is a different question that you have to ask yourself). You have to remember, however, that sampling data with a hierarchical structure can be more complicated than simple random sampling of cases.
As for validating your model and literature on this topic, I would recommend the book by Gelman and Hill (2006). The book covers linear regression, multilevel models and Bayesian hierarchical models. The authors describe several ways of validating models, including an approach borrowed from Bayesian statistics named posterior predictive checks (cf. Kruschke, 2013). The idea behind posterior predictive checks is simple: you compare the posterior distribution of your model with the real data to check where they agree and where they disagree. In a non-Bayesian analysis you do not have a posterior distribution, but you can obtain an analogue by simulation (the lme4 library has a simulate function for this). The aim of the simulation is to produce fake data under the fitted model, so that this data can be compared to the real data. The results of such a simulation can be compared visually (e.g. histograms) or via summary statistics (e.g. mean, median, variance, quantiles). Notice also that since you can obtain other samples from your population, you can always compare (a) the distribution of your sample to the distribution of other samples, and (b) the posterior distribution of the fitted model to the distribution of the variable of interest in other samples. You should not forget about general model diagnostics, but this is already described in this thread (see also Bates, 2010 and Bolker et al., 2009).
Combining models (model averaging, cf. Buckland, Burnham and Augustin, 1997) is something that can always be done and often is. If you are interested in prediction, then averaging parameters or predictions from different models should lead to better predictions (in terms of error) than any of the individual predictions alone, and should be more robust. You can find some brief information about model averaging in papers by Johnson and Omland (2004) and Bolker et al. (2009), and a more detailed description in Zhang, Zou and Liang (2014), who propose using information criteria such as AIC to create weights for averaging (the more information a model provides, the greater its weight; similar example here). For averaging, the $k$-th model's weight $w_k$ is calculated from its AIC value $I_k$ and normalized so the weights sum to $1$:
$$ w_k = \frac{\exp(-I_k/2)}{ \sum_{i=1}^K \exp(-I_i/2)} $$
In your case it would probably be enough to take a sample from your population, estimate your model on it, and then use this model to make predictions on data from another sample, i.e. use cross-validation with a holdout sample. If your model fits the holdout data badly, you can always change your model (or take a larger training sample) and assess the results on another holdout sample (better to take a different sample than reuse the previous one, so as not to end up with an overfitted model). Such an approach is easy, not computationally intensive, and clear in its methodology. Making predictions on a holdout sample lets you estimate the model's error.
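As a sketch of how the AIC weights above are computed in practice (plain Python, hypothetical AIC values; subtracting the minimum AIC before exponentiating is the standard numerical-stability trick and cancels in the ratio):

```python
import math

def akaike_weights(aic_values):
    """w_k = exp(-I_k/2) / sum_i exp(-I_i/2), computed after subtracting
    the smallest AIC: the shift cancels in the ratio but avoids underflow
    when AIC values are large."""
    best = min(aic_values)
    rel = [math.exp(-(a - best) / 2.0) for a in aic_values]
    total = sum(rel)
    return [r / total for r in rel]

w = akaike_weights([100.0, 102.0, 110.0])  # hypothetical AIC values
print([round(x, 3) for x in w])            # weights sum to 1; best model dominates
```

The weights can then multiply either each model's prediction or (with care about parameterization) its coefficient estimates to form the averaged model.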
Gelman, A., & Hill, J. (2006). Data analysis using regression and multilevel/hierarchical models. Cambridge University Press.
Kruschke, J. K. (2013). Posterior predictive checks can and should be Bayesian: Comment on Gelman and Shalizi,‘Philosophy and the practice of Bayesian statistics’. British Journal of Mathematical and Statistical Psychology, 66(1), 45-56.
Bates, D.M. (2010). lme4: Mixed-effects modeling with R. (Unpublished.)
Bolker, B.M., Brooks, M.E., Clark, C.J., Geange, S.W., Poulsen, J.R., Stevens, M.H.H., & White, J.S.S. (2009). Generalized linear mixed models: a practical guide for ecology and evolution. Trends in ecology & evolution, 24(3), 127-135.
Johnson, J. B., & Omland, K. S. (2004). Model selection in ecology and evolution. Trends in ecology & evolution, 19(2), 101-108.
Zhang, X., Zou, G., & Liang, H. (2014). Model averaging and weight choice in linear mixed-effects models. Biometrika, 101(1), 205-218.
Buckland, S.T., Burnham, K.P., & Augustin, N.H. (1997). Model selection: an integral part of inference. Biometrics, 603-618.
|
|
47,893
|
How can we calculate the variance inflation factor for a categorical predictor variable when examining multicollinearity in a linear regression model?
|
The function you want comes in the {car} package in R.
I tried to figure it out by running some regression models on the built-in mtcars dataset.
Evidently, I can get the VIF both with the vif function and manually when the regressor is a continuous variable:
require(car)
attach(mtcars)
fit1 <- lm(mpg ~ wt + hp + disp) # The model we want.
fit_wt <- lm(wt ~ hp + disp) # Regressing wt against other regressors.
rsq_wt <- summary(fit_wt)$r.square # Detecting the R square of the model
(v_wt <- 1/(1 - (rsq_wt))) # Actual formula for VIF
vif(fit1) # R built-in function
Now for the real question, here is what I find. Let's say that your regressor is am, which corresponds to the categorical variable for the type of transmission of the car (automatic versus manual).
Ordinarily, you would fit a model such as:
fit2 <- lm(mpg ~ wt + disp + as.factor(am))
The problem is that if you try now to get the VIF for am by just reshuffling the regressors, you get warnings:
fit_am <- lm(as.factor(am) ~ wt + disp)
Warning messages:
1: In model.response(mf, "numeric") :
using type = "numeric" with a factor response will be ignored
2: In Ops.factor(y, z$residuals) : - not meaningful for factors
Game over? Not quite... Look what happens if I treat am as continuous:
> fit2 <- lm(mpg ~ wt + disp + as.factor(am))
> fit_am <- lm(am ~ wt + disp)
> rsq_am <- summary(fit_am)$r.square
> (v_am <- 1/(1 - (rsq_am)))
[1] 1.931264
> vif(fit2)
wt disp as.factor(am)
5.939675 4.752561 1.931264
We get the same value manually as with the R built-in function vif.
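The same manual computation is easy to reproduce outside R. A sketch in Python/NumPy on made-up data (x2 is built to correlate with x1, x3 is independent; none of this is the mtcars data): regress each predictor on the remaining ones, take the $R^2$, and apply $1/(1-R^2)$.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + rng.normal(scale=0.6, size=n)  # built to correlate with x1
x3 = rng.normal(size=n)                        # independent of the others

def vif(target, others):
    """1 / (1 - R^2) from regressing `target` on `others` plus an intercept."""
    X = np.column_stack([np.ones(len(target))] + list(others))
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    resid = target - X @ beta
    r2 = 1.0 - resid.var() / target.var()
    return 1.0 / (1.0 - r2)

v1 = vif(x1, [x2, x3])   # inflated: x2 carries much of x1's information
v3 = vif(x3, [x1, x2])   # close to 1: x3 is unrelated to the rest
print(v1, v3)
```

This also makes plain why treating a two-level factor as 0/1 numeric works: the auxiliary regression only needs the dummy coding, not the factor machinery.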
|
|
47,894
|
Testing for classification significance
|
It is very unusual to perform a significance test on a classifier (it is also very unusual to use 70-fold cross-validation on a dataset of 160 cases - the most common choices are 5 or 10 folds; with that many folds you could have chosen a leave-one-out procedure).
The issue is the null hypothesis. You probably want to know if your classifier is significantly better than a random classifier - one that did not really learned anything from the data.
Let us assume that the dataset is binary (only two classes, + and -) where p+ is the proportion of positive cases in the dataset. Let us consider a classifier that randomly answers + with 50% probability. The chance that a data point is + is p+. Finally, since the classifier's output is independent of the data itself, the probability that the classifier will be correct on a + prediction is 0.5*p+.
Similarly, the probability of being right on a - prediction is 0.5*p-.
If p+ is 0.5, then the classifier will be right 0.5 of the time. And that is the null hypothesis for the situation where p+=0.5.
But if p+=0.9, a classifier that guesses + with 0.5 probability will still have a
0.5*0.9+0.5*0.1 = 0.5
probability of being right. But a "smarter" random classifier, that makes a + guess with 0.9 probability, will have an accuracy of
0.9*0.9+0.1*0.1 = 0.82
probability of being right, which is the maximum probability for a random classifier.
Thus, the null hypothesis for a dataset with p+ proportion of positives is an accuracy of
acc_null = p+^2 + p-^2
So you need to collect the p+ and p- of your dataset and compute the acc_null.
The question now is whether your 71% accuracy is significantly different from acc_null. That can only be answered if you know the number of times your classifier was right, and you do: of the 160 data points, the classifier was correct 0.71*160 = 113.6 ≈ 114 times.
Thus you need a binomial test to figure out the probability that a random process that generates a "correct" (a 1, a "success") with probability acc_null would have generated 114 or more successes out of 160 tries. This is the p-value you are looking for.
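Putting the pieces together for the numbers in the question (a sketch in plain Python using an exact binomial tail; I assume a balanced dataset, p+ = 0.5, so acc_null = 0.5 - plug in your own p+):

```python
import math

def binom_upper_tail(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): one-sided p-value against the
    null that the classifier matches the random baseline."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

p_pos = 0.5                           # assumed class balance; use your own p+
acc_null = p_pos**2 + (1 - p_pos)**2  # accuracy of the best random classifier
n, correct = 160, round(0.71 * 160)   # 0.71 * 160 = 113.6, i.e. 114 correct
pval = binom_upper_tail(correct, n, acc_null)
print(correct, pval)
```

With these numbers the p-value is tiny, so 71% accuracy on 160 cases is comfortably better than the p+ = 0.5 random baseline; with a skewed p+ the baseline acc_null is higher and the same accuracy is less impressive.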
|
|
47,895
|
Treating missing values in panel data set
|
Imputation is very useful for improving the accuracy of your parameter estimates in situations where a significant amount of data would otherwise be deleted. Consider that in a study with, for example, 100 observations and four regressors, each with a 10% missing observation rate, you'll only be missing 10% of the data but on average you'll be deleting about 34% of the observations if you drop each observation with one or more missing values - which is what happens if you just run the data through a standard regression package. You'll be deleting much more data (2.4x in fact) than is actually missing. In addition, unless your data is missing completely at random, case deletion can introduce bias into your parameter estimates.
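The 34% figure is just the complement of every regressor being observed, $1-(1-0.1)^4 \approx 0.344$, under the (simplifying) assumption that the four regressors go missing independently. A one-line Python sketch makes it easy to redo the calculation for other missingness rates:

```python
def deleted_fraction(k, m):
    """Expected share of rows lost to case deletion when each of k
    regressors is independently missing with probability m per row."""
    return 1.0 - (1.0 - m) ** k

print(deleted_fraction(4, 0.10))  # ~0.344: about 34% of rows dropped
```

The loss grows quickly with the number of incomplete variables, which is why case deletion gets painful in wide datasets even at modest per-variable missingness.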
It is typically better to use an imputation algorithm that captures at least the covariance structure of the data and generates random numbers (rather than replacing with mean or median values.) This holds true especially if you're going to be doing some estimation using the imputed data, because you'll get more accurate estimates of the covariance matrix of the parameters. Replacing by the mean value will give you overly optimistic standard errors, sometimes by quite a bit.
I've included an example using the default imputation method from the mice package in R. The example has a regression with 100 observations and four regressors, each with a 10% chance of a missing value at every observation. We compare the std. errors of the estimates for the complete-data regression (no missing values), the case deletion regression (delete any observation with a missing value), mean imputation (replace the missing value by the mean of the variable), and a good quality imputation routine that estimates the covariance matrix of the data and generates random values. I've constructed nonlinear relationships between the regressors such that mice isn't going to model them using their true relationships, just to add a layer of inaccuracy to the whole thing. I've run the entire process 100 times and averaged the standard errors of the four methods for each of the parameters for comparative purposes.
Here's the code, with a comparison of the standard errors at the bottom:
library(mice)        # imputation routines used below
library(data.table)  # used to summarize the results at the end
results <- data.frame(se_x1 = rep(0,400),
se_x2 = rep(0,400),
se_x3 = rep(0,400),
se_x4 = rep(0,400),
method = c(rep("Complete data",100),
rep("Case deletion",100),
rep("Mean value imputation", 100),
rep("Randomized imputation", 100)))
N <- 100
pct_missing <- 0.1
for (i in 1:100) {
x1 <- 4 + rnorm(N)
x2 <- 0.025*x1^2 + rnorm(N)
x3 <- 0.2*x1^1.3 + 0.04*x2^0.7 + rnorm(N)
x4 <- 0.4*x1^0.3 - 0.3*x2^1.1 + rnorm(N)
e <- rnorm(N, 0, 1.5)
y <- x1 + x2 + x3 + e # The coefficient of x4 = 0
# Complete data regression
mc <- summary(lm(y~x1+x2+x3+x4))
results[i,1:4] <- mc$coefficients[2:5,2]
# Cause data to be missing
x1[rbinom(N,1,pct_missing)==1] <- NA
x2[rbinom(N,1,pct_missing)==1] <- NA
x3[rbinom(N,1,pct_missing)==1] <- NA
x4[rbinom(N,1,pct_missing)==1] <- NA
# Case deletion
mm <- summary(lm(y~x1+x2+x3+x4))
results[i+100,1:4] <- mm$coefficients[2:5,2]
# Mean value imputation
x1m <- x1; x1m[is.na(x1m)] <- mean(x1, na.rm=TRUE)
x2m <- x2; x2m[is.na(x2m)] <- mean(x2, na.rm=TRUE)
x3m <- x3; x3m[is.na(x3m)] <- mean(x3, na.rm=TRUE)
x4m <- x4; x4m[is.na(x4m)] <- mean(x4, na.rm=TRUE)
mmv <- summary(lm(y~x1m+x2m+x3m+x4m))
results[i+200,1:4] <- mmv$coefficients[2:5,2]
# Imputation; I'm only using 1 of the 5 multiple imputations
# It would be better to use all the multiple imputations, though.
imp <- mice(cbind(y,x1,x2,x3,x4), printFlag=FALSE)
x1[is.na(x1)] <- as.numeric(imp$imp$x1[,1])
x2[is.na(x2)] <- as.numeric(imp$imp$x2[,1])
x3[is.na(x3)] <- as.numeric(imp$imp$x3[,1])
x4[is.na(x4)] <- as.numeric(imp$imp$x4[,1])
mi <- summary(lm(y~x1+x2+x3+x4))
results[i+300,1:4] <- mi$coefficients[2:5,2]
}
options(digits = 3)
results <- data.table(results)
results[, .(se_x1 = mean(se_x1),
se_x2 = mean(se_x2),
se_x3 = mean(se_x3),
se_x4 = mean(se_x4)), by = method]
And the output:
method se_x1 se_x2 se_x3 se_x4
1: Complete data 0.208 0.278 0.192 0.193
2: Case deletion 0.267 0.359 0.244 0.250
3: Mean value imputation 0.231 0.301 0.212 0.217
4: Randomized imputation 0.213 0.271 0.195 0.198
Note that the complete data method is as good as you can get with this data. Case deletion results in considerably less accurate parameter estimates, but the randomized imputation of mice gets you almost all the way back to the accuracy you would get with complete data. (These numbers are a little optimistic, as I'm not using the full multiple imputation approach, but this is just a simple example.) The mean value imputation in this case appears to have improved things considerably relative to case deletion, but its standard errors are overly optimistic: filling in the mean understates the true variability in the data.
So the tl;dr version is: impute, unless you'd only be missing a very small fraction of your cases using case deletion (like 1%). The big caveat is: understand the assumptions that are required for imputation first! If data are not missing at random (I'm using that phrase non-technically; look up what imputation requires in this respect), imputation won't help you, and may make things worse. But that's a topic for another question. Here are a couple of links which might be helpful: overview of imputation, missing data rates and imputation, different imputation algorithms.
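As noted in the code comments, using just one of the five imputations is a shortcut; the standard approach pools all m fits with Rubin's rules. Here is a small Python sketch of just the pooling arithmetic, using made-up per-imputation estimates (these numbers are illustrative, not output from the run above):

```python
import numpy as np

# Hypothetical results for one coefficient from m = 5 imputed datasets;
# illustrative numbers only, not taken from the R run above.
estimates = np.array([1.02, 0.97, 1.05, 0.97, 1.01])       # point estimates
variances = np.array([0.043, 0.045, 0.041, 0.046, 0.044])  # squared std. errors

m = len(estimates)
pooled_est = estimates.mean()        # pooled point estimate
W = variances.mean()                 # within-imputation variance
B = estimates.var(ddof=1)            # between-imputation variance
T = W + (1 + 1 / m) * B              # Rubin's total variance
pooled_se = np.sqrt(T)
print(pooled_est, pooled_se)
```

The total variance T = W + (1 + 1/m)B is what restores honest standard errors; in R, the usual mice workflow `fit <- with(imp, lm(...)); pool(fit)` does this pooling for you.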
|
47,896
|
Ancillary statistics:Beta distribution is free of $\beta$?
|
There is either a typo in giving the pdf of $Z$ or you are confusing it with the general definition of a Beta $\text{B}(\alpha,\beta)$, as it should be a Beta $\text{B}(\alpha,\alpha)$ distribution. For instance, your link shows why the ratio of two Gamma $\text{G}(\alpha_i,\beta)$ variates is a Beta $\text{B}(\alpha_1,\alpha_2)$ variate. (This reference can be confusing as it uses $\alpha$ and $\beta$ in the opposite of the standard way: $\alpha$ is the scale there!)
The reason why $Z$ does not depend on $\beta$ is that, when $$X_1,X_2\sim\text{G}(\alpha,\beta)$$ then
$$Y_1=\beta X_1,Y_2=\beta X_2\sim\text{G}(\alpha,1)$$ since $\beta$ is a scale factor. Therefore
$$Z=\dfrac{X_1}{X_1+X_2}=\dfrac{\beta X_1}{\beta X_1+\beta X_2}=\dfrac{Y_1}{Y_1+Y_2}$$
does not depend on $\beta$.
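This scale invariance is easy to check by simulation. A quick Python sketch (the parameter values are arbitrary): draw $Z$ for two very different rates $\beta$ and compare both samples to $\text{B}(\alpha,\alpha)$ with a KS test:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, n = 2.0, 20000

def z_sample(beta):
    # numpy parameterizes the Gamma by shape and scale, so scale = 1/beta
    x1 = rng.gamma(alpha, 1.0 / beta, n)
    x2 = rng.gamma(alpha, 1.0 / beta, n)
    return x1 / (x1 + x2)

z_small, z_large = z_sample(1.0), z_sample(7.5)
# Both samples should look like Beta(alpha, alpha), whatever beta was:
p_small = stats.kstest(z_small, stats.beta(alpha, alpha).cdf).pvalue
p_large = stats.kstest(z_large, stats.beta(alpha, alpha).cdf).pvalue
print(p_small, p_large)
```

Neither KS p-value should be tiny, confirming that the distribution of $Z$ does not involve $\beta$.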
|
47,897
|
Bound for weighted sum of Poisson random variables
|
We can use the saddlepoint approximation. I will follow closely my answer to Generic sum of Gamma random variables. For the saddlepoint approximation I will follow Ronald W Butler: "Saddlepoint approximations with applications" (Cambridge UP). See also the post How does saddlepoint approximation work?
Let $X_1, \dots, X_n$ be independent Poisson random variables with parameters $\lambda_1, \dots, \lambda_n$. Let $a_1, \dots, a_n$ be positive real numbers. We define the random variable $X=\sum_{i=1}^n a_i X_i$ and want an approximation for the distribution of $X$. When the weights $a_i$ are integers and $n$ is not too large, we can use numerical convolution. For the general case the saddlepoint approximation gives a good approximation for the density (probability) function.
The cumulant generating function for $X_i$ is given by $K_i(s) = \lambda_i (e^s - 1)$, $s \in (-\infty, +\infty)$. The cumulant generating function of $X$ is then
$$
K(s) = \sum_{i=1}^n K_i(a_i s) = \sum_{i=1}^n \lambda_i (e^{a_i s} - 1)
$$
We will need the first two derivatives, given by
$$
K'(s) = \sum \lambda_i a_i e^{a_i s} \\
K''(s) = \sum \lambda_i a_i^2 e^{a_i s}
$$
The saddlepoint equation is given by
$$
K'(\hat{s})=x
$$
which defines $\hat{s}=\hat{s}_x$ implicitly as a function of $x$.
The saddlepoint density function (for $x>0$) is now given by
$$
\hat{f}(x) = \frac1{\sqrt{2\pi K''(\hat{s})}} \exp\left(K(\hat{s}) - \hat{s} x\right)
$$
and the probability that $X=0$ is given (exactly) by
$$
\hat{f}(0) = \exp(-\sum \lambda_i)
$$
An implementation in R is below:
Saddlepoint approximation for a weighted sum of independent Poisson random variables:
# Needs R 3.1.0 or newer (for the extendInt argument of uniroot)
make_cumgenfun <- function(lambda, a) {
      # we return list(lambda, a, K, K', K'')
      n <- length(lambda)
      m <- length(a)
      stopifnot( n==m, lambda>0, a>0)
      return( list(lambda=lambda, a=a,
                   Vectorize(function(s) {sum(lambda * (exp(a*s)-1))} ),
                   Vectorize(function(s) {sum(lambda * a * exp(a*s))} ),
                   Vectorize(function(s) {sum(lambda * a * a * exp(a*s))} )))
}
Functions to get expectation and variance of X:
Ef <- function(lambda, a) sum(lambda*a)
Vf <- function(lambda, a) sum(lambda*a*a)
solve_speq <- function(x, cumgenfun) {
      # Solves the saddlepoint equation K'(s) = x and returns the saddlepoint
      Kd <- cumgenfun[[4]]
      uniroot(function(s) Kd(s)-x, lower=-100, upper=+100,
              extendInt="yes")$root
}
# For an example, define
set.seed(1234)
lambda <- 1:10
a <- runif(10, 0.5, 3)
E <- Ef(lambda, a)
V <- Vf(lambda, a)
# Probability that X=0 (this is exact; note it must come after lambda is defined)
P0 <- exp(-sum(lambda))
# Now, a function giving the (uncorrected) saddlepoint density. We include the special case for x=0
make_fhat <- function(lambda, a) {
      cgf1 <- make_cumgenfun(lambda, a)
      K <- cgf1[[3]]
      Kd <- cgf1[[4]]
      Kdd <- cgf1[[5]]
      # function finding fhat for one specific x:
      fhat0 <- function(x) if (x==0) P0 else {
            # solve the saddlepoint equation:
            s <- solve_speq(x, cgf1)
            # calculate the saddlepoint density value:
            (1/sqrt(2*pi*Kdd(s)))*exp(K(s)-s*x)}
      # Returning a vectorized version:
      return(Vectorize(fhat0))
} # end make_fhat
and running this code in R:
> fhat <- make_fhat(lambda, a)
> E
[1] 94.72556
> V
[1] 185.3017
> sqrt(V)
[1] 13.61256
> fhat(0)
[1] 1.299581e-24
> fhat(94)
[1] 0.02938575
> fhat(107)
[1] 0.01861648
> integrate(fhat, lower=0, upper=Inf)
1.001878 with absolute error < 3.6e-05
>
The last integration can be used to correct fhat to get integral 1 (not shown here).
Finally, we can get a plot of the approximate density:
> plot(fhat, from=60, to=130)
Now you can compare this yourself with a normal approximation and with simulations. It should be quite accurate!
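For readers working in Python, the same construction can be sketched with scipy (brentq plays the role of uniroot; the weights here are fresh random draws, so the numbers will not match the R output above):

```python
import numpy as np
from scipy.optimize import brentq
from scipy.integrate import quad

rng = np.random.default_rng(1234)
lam = np.arange(1, 11, dtype=float)   # lambda_1, ..., lambda_10
a = rng.uniform(0.5, 3.0, size=10)    # positive weights (different draws than in R)

def K(s):    # cumulant generating function of X
    return np.sum(lam * (np.exp(a * s) - 1.0))

def Kd(s):   # K'(s)
    return np.sum(lam * a * np.exp(a * s))

def Kdd(s):  # K''(s)
    return np.sum(lam * a**2 * np.exp(a * s))

def fhat(x):
    if x == 0:
        return np.exp(-lam.sum())                 # exact point mass at zero
    s = brentq(lambda t: Kd(t) - x, -50.0, 50.0)  # saddlepoint equation K'(s) = x
    return np.exp(K(s) - s * x) / np.sqrt(2.0 * np.pi * Kdd(s))

E, V = np.sum(lam * a), np.sum(lam * a**2)
sd = np.sqrt(V)
mass, _ = quad(fhat, 1e-3, E + 20 * sd, points=[E - sd, E, E + sd])
print(E, sd, mass)  # the total mass should come out close to 1
```

As in the R version, the uncorrected density integrates to approximately (but not exactly) 1, and dividing by that mass gives the normalized approximation.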
|
47,898
|
Kendall's tau derivation from first principles?
|
Tau is an "indicator" version of covariance.
Recall this image from How would you explain covariance to someone who understands only the mean:
It shows two possible configurations of pairs of points in a scatterplot. The red pairs are "positively" oriented (or "concordant"): they are at the lower left and upper right of the rectangle they delimit. The smaller x-coordinate and the smaller y-coordinate are together; the larger x-coordinate and the larger y-coordinate are together. This is a situation of positive association.
The cyan pairs are "negatively" oriented (or "discordant"): the larger of the x-coordinates is associated with the smaller of the y-coordinates, and vice versa. The association is negative.
Covariance quantifies these associations in terms of the areas of the rectangles, assigning the cyan rectangles negative areas. Kendall's Tau also quantifies these associations, but it does so in a simpler way: it just counts them. A red rectangle counts as $+1$; a cyan rectangle as $-1$. Let's call these the "signs" of the rectangles.
Any scatterplot of $n$ points $(x_i, y_i), i=1, 2, \ldots, n$ determines $\binom{n}{2}=n(n-1)/2$ such rectangles, because each rectangle is uniquely associated with an unordered pair of those points. Whereas the covariance averages the (signed) areas of these rectangles, Kendall's Tau averages their signs. That's all the formulas are doing.
There are two ways to understand the initial probability statement. One is the empirical probability. It contemplates writing each unordered pair of points in a sample, $\{(x_i,y_i), (x_j,y_j)\},\ i\ne j,$ on a slip of paper. Also on each slip write $+1$ for the concordant pairs and $-1$ for the discordant pairs. Put those slips into a box, mix them up, and randomly draw one out. The expression
$$\Pr[(x_1-x_2)(y_1-y_2) \gt 0]$$
is the chance of observing a $+1$. The other expression in the question similarly is the chance of observing a $-1$. But look at the difference this way:
$$\tau = \Pr(\text{positive association})\times(+1) + \Pr(\text{negative association})\times(-1).$$
That probability-weighted sum of values on the tickets is, by definition, an expectation. Obviously it is the expected value of the signs on those slips of paper.
$\tau$ is the average value of the signs of the rectangles in the scatterplot.
The average in the box is obtained, of course, by adding up all the values and then dividing by the number of tickets in the box. That's precisely the formula for $\tau$ in the question.
This description extends to any box of pairs of tickets constructed in this manner. Thus, $\tau$ can also be thought of as a property of any bivariate distribution. For distributions with infinite support (such as continuous distributions), it would have to be computed as a double integral rather than a sum, but that changes nothing about the underlying ideas or interpretations.
Similarly, just as any sample of a bivariate distribution has a covariance--and we hope it might have some relationship to the covariance of the underlying distribution (if a covariance exists at all!)--any sample of a bivariate distribution has a $\tau$ statistic, and we hope it might have some relationship to the $\tau$ value of the underlying distribution (which always exists, whether or not the covariance exists).
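The "average sign" description translates directly into code. A small Python sketch with illustrative data: compute $\tau$ by brute force over all $\binom{n}{2}$ pairs and compare it with a library implementation, which agrees exactly when there are no ties:

```python
import numpy as np
from itertools import combinations
from scipy.stats import kendalltau

rng = np.random.default_rng(42)
n = 50
x = rng.normal(size=n)
y = x + 0.5 * rng.normal(size=n)   # positively associated, continuous (no ties)

# tau as the average sign over all n(n-1)/2 point pairs ("rectangles"):
signs = [np.sign((x[i] - x[j]) * (y[i] - y[j]))
         for i, j in combinations(range(n), 2)]
tau_manual = float(np.mean(signs))

tau_lib = kendalltau(x, y)[0]      # identical to tau_manual without ties
print(tau_manual, tau_lib)
```

With ties present, library implementations typically apply a tie correction (tau-b), so the two numbers can then differ.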
|
47,899
|
Fisher's exact test vs kappa analysis
|
I know I am answering the question two years later, but I hope some future readers may find the answer helpful.
Cohen's $\kappa$ tests whether data fall on the diagonal of a classification table more often than chance would predict, whereas Fisher's exact test evaluates the association between two categorical variables.
In some cases, Cohen's $\kappa$ might appear to converge to Fisher's exact test. A simple case will answer your question: the Fisher test is not appropriate for rater agreement.
Imagine a $2 \times 2$ matrix like
$\begin{matrix} 10 & 20 \\ 20 & 10\end{matrix}$.
It is clear that there is an association between both variables on the off-diagonal, but that raters do not agree more than chance. In other terms, raters systematically disagree. From the matrix, we should expect the Fisher test to be significant while Cohen's $\kappa$ should not be. Carrying out the analysis confirms the expectation, $p = 0.01938$ and $\kappa = -0.333$, $z =-4743$ and $p = 0.999$.
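A quick Python check of this $2 \times 2$ example, computing $\kappa$ straight from its definition $(p_o - p_e)/(1 - p_e)$ rather than via a dedicated package:

```python
import numpy as np
from scipy.stats import fisher_exact

table = np.array([[10, 20],
                  [20, 10]])

# Fisher's exact test: association between the two categorical variables
_, p_fisher = fisher_exact(table)

# Cohen's kappa from its definition:
n = table.sum()
p_o = np.trace(table) / n                                    # observed agreement
p_e = (table.sum(axis=0) * table.sum(axis=1)).sum() / n**2   # chance agreement
kappa = (p_o - p_e) / (1 - p_e)
print(p_fisher, kappa)
```

The Fisher p-value is well below 0.05 while $\kappa = -1/3$: a significant association, but systematic disagreement rather than agreement.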
We can also carry another example where both outcomes diverge with the following matrix :
$\begin{matrix} 20 & 10 & 10 \\ 20 & 20 & 20 \\ 20 & 20 & 20 \end{matrix}$,
which gives $p = 0.4991$ and $\kappa = 0.0697$, $z =1.722$ and $p = 0.043$. So the raters agree slightly more than chance, but there is no significant association between the categorical variables.
I don't have a more formal mathematical explanation on how they should or should not converge though.
Finally, given the actual state of knowledge on Cohen's $\kappa$ in the methodological literature (see this for instance), you might want to avoid it as a measure of agreement. The coefficient has a lot of issues. Careful training of raters and strong agreement on each category (rather than the overall agreement) is, I believe, the way to go.
|
47,900
|
Should I consider time as a fixed or random effect in GLMM?
|
I think it may be a little more complex than just "fixed" or "random" effect. What you seem to be suggesting is that there is a known decline in bird abundance over the years. What is perhaps not known is whether that can be explained by values of existing variables in your regression. Ideally, you would include all the variables that could be influencing abundance and not the year, but it seems perhaps you have some unmeasured variables that are time-dependent.
If you include a coefficient for every possible year (treating it as a factor) this will lead to a fairly saturated model, and you may get biased coefficient estimates for other variables.
If you instead treat year as a random effect (i.e., for each year the effect is sampled randomly from a fixed Normal distribution) you are violating the requirement that random effects be exchangeable (years following a systematic trend are not exchangeable), so that does not appear to be legitimate either.
If you instead include year as a linear predictor (i.e., have a single coefficient for the year, perhaps centred around the study midpoint year) you might run into problems if the actual effect of the unmeasured variables is non-linear. This could be checked by examining the prediction residuals versus the year covariate.
My advice would be to do the following:
Plot abundance (log transformed) versus year, to see what the overall structure looks like. If it seems to be linear then try adding year as a linear predictor (fixed effect) and examine the relationship between the residuals and year.
Run your model without year as a predictor and examine the relationship between the residuals from this model and year - if there is some form of structure then you need to account for it somehow.
Perhaps consider the use of fractional polynomials in your regression, as these can be quite flexible without increasing model complexity too dramatically. In this case you will need to rescale year so it is always positive but not too large.
Hope that is of some help...
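Point 2 of the advice above is easy to demonstrate. A small Python sketch with simulated data (all numbers are invented for illustration): fit the model without year, then look for year structure in the residuals:

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.repeat(np.arange(2000, 2010), 30)     # 10 survey years, 30 sites each
habitat = rng.normal(size=years.size)            # a measured covariate
# Simulated log-abundance with a hidden linear decline over the years:
log_abund = (2.0 + 0.8 * habitat
             - 0.15 * (years - 2005)
             + rng.normal(0.0, 0.5, size=years.size))

# Fit the model WITHOUT year (OLS via least squares):
X = np.column_stack([np.ones(years.size), habitat])
beta, *_ = np.linalg.lstsq(X, log_abund, rcond=None)
resid = log_abund - X @ beta

# Structure in residuals versus year signals an unmodelled time effect:
r = float(np.corrcoef(resid, years)[0, 1])
print(r)  # clearly negative here, so year needs to be accounted for somehow
```

If the residual-versus-year relationship looks linear, a single year coefficient may suffice; if it is curved, something more flexible (such as the fractional polynomials mentioned above) is worth trying.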
|