47,401
Spurious correlation
The regression would not be spurious. If $Y_t=\delta_0+\delta_1 t+u_t$ and $X_t=\gamma_0+\gamma_1t+v_t$ then
$$t=\frac{1}{\gamma_1}X_t-\frac{\gamma_0}{\gamma_1}-\frac{1}{\gamma_1}v_t$$
and
$$Y_t=\delta_0-\frac{\delta_1\gamma_0}{\gamma_1}+\frac{\delta_1}{\gamma_1}X_t+u_t-\frac{\delta_1}{\gamma_1}v_t$$
Now this is simply a regression
$$Y_t=\alpha_0+\alpha_1X_t+\varepsilon_t$$
and it is possible to show that the OLS estimates $\hat\alpha_0$ and $\hat\alpha_1$ are consistent and asymptotically normal with means $\delta_0-\frac{\delta_1\gamma_0}{\gamma_1}$ and $\frac{\delta_1}{\gamma_1}$ respectively, albeit with non-standard normalizing constants. The mathematical details can be found in this answer.
The consistency can be illustrated by the following code:
# simulate two independent trend-stationary series of length n
gend <- function(n) {
  data.frame(x = 1 + 2*(1:n) + rnorm(n), y = 3 + 4*(1:n) + rnorm(n))
}
> set.seed(13)
> coef(lm(y~x,data=gend(10)))
(Intercept) x
-1.291464 2.067586
> coef(lm(y~x,data=gend(100)))
(Intercept) x
1.396720 1.997408
> coef(lm(y~x,data=gend(1000)))
(Intercept) x
0.9864317 1.9999570
> coef(lm(y~x,data=gend(10000)))
(Intercept) x
0.9595726 2.0000065
Here I generated two trend-stationary variables with $\gamma_0=1$, $\gamma_1=2$, $\delta_0=3$ and $\delta_1=4$. As we can see, the regression estimates approach the true values $\alpha_0=\delta_0-\delta_1\gamma_0/\gamma_1=1$ and $\alpha_1=\delta_1/\gamma_1=2$.
47,402
Longitudinal predictive models
This is phase 1 of my answer. I want first to make sure that I understand the model.
Take just one customer, say customer $i$, and denote by $s_{it}$ its monthly savings balance. From what the OP writes, the model appears to be (for one customer and assuming linearity for the moment)
$$ s_{it} = g(a_1t,a_2t^2, a_3t^3,..) + X_{i3}\beta + u_{it},\qquad t=4,...,36$$
...where $g(a_1t,a_2t^2, a_3t^3,\dots)$ represents various time variables (which enter additively), and $X_{i3}$ is the matrix including the "initial state data" (of time period $3$).
Is this an accurate depiction of the general idea? If it is, I have one question and one remark:
Question: what happens to the data from periods $1,2,3$? Aren't they used somehow (apart from $X_{i3}$)?
Remark: if all regressors in $X_{i3}$ are time-invariant, then their effects on $s_{it}$ cannot be separated: they form a composite "intercept".
Waiting on OP's feedback.
PHASE 2
Given the OP's clarifications (although I must note that it is not "for simplicity" that a time-invariant regressor "does not change with $t$"), let's explore the possible model
$$ s_{it} = \sum_{l=1}^{k}a_lt^l + X_i\beta + u_{it},\qquad t=4,...,36$$
My previous remark stands: whatever is in $X_i$ (regressors specific to customer $i$, products of these regressors, or even some collective variables from periods $1,2,3$), to the degree that it is time-invariant over the period $4,\dots,36$, its effects on the dependent variable cannot be separated. Moreover, $X_i$ is known, and if its elements are measured in the same units as the dependent variable, their sum should be subtracted from the dependent variable; they do not belong on the RHS of the equation.
Moreover, judging by a comment of the OP under another answer, it appears that customers are assumed to exhibit similar behavior: namely, we could use some accounts to estimate the regression, and then use the estimated coefficients to predict the behavior of other accounts, presumably belonging to other customers.
Taking all of the above into account, we are led to a model that can be written
$$ s_{it} - \sum_{j=1}^{m}x_{ij} = \sum_{l=0}^{k}a_lt^l + u_{it},\qquad t=4,...,36\qquad i=1,...,n$$
...where $n$ is the number of customers comprising the training sample. This model could be estimated by one of the many panel data estimators available, given also the constraints imposed by the structure of the specification. But although the panel data would permit better estimation of the coefficients involved, the main issue remains: we are depending only on powers of $t$ to capture the variability of the dependent variable. Personally, I don't "trust" that kind of model: it is a classic case where you can obtain a "perfect fit" by adding more and more powers of $t$, only to find out that this "perfectly fitted" model is the worst when it comes to out-of-sample prediction.
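The out-of-sample failure of pure polynomial-in-$t$ models is easy to demonstrate. Here is an illustrative Python sketch (the trend parameters, noise level, and holdout horizon are all made-up numbers): higher-degree polynomials always fit the training months better, yet extrapolate dramatically worse.

```python
import numpy as np

rng = np.random.default_rng(1)

# Monthly balances for one hypothetical account: a linear trend plus
# noise, observed for t = 4..36 as in the model above, with 12 further
# months held out to judge out-of-sample prediction.
t_train = np.arange(4, 37)
t_test = np.arange(37, 49)
y_train = 100 + 5 * t_train + rng.normal(0, 10, t_train.size)
y_test = 100 + 5 * t_test + rng.normal(0, 10, t_test.size)

def z(t):
    # rescale t to roughly [-1, 1] to keep the polynomial fit well conditioned
    return (t - 20.0) / 16.0

rmse = {}
for deg in (1, 3, 10):
    coefs = np.polyfit(z(t_train), y_train, deg)
    rmse[deg] = (
        np.sqrt(np.mean((np.polyval(coefs, z(t_train)) - y_train) ** 2)),
        np.sqrt(np.mean((np.polyval(coefs, z(t_test)) - y_test) ** 2)),
    )
    print(f"degree {deg:2d}: in-sample RMSE {rmse[deg][0]:8.1f}, "
          f"out-of-sample RMSE {rmse[deg][1]:10.1f}")
```

The degree-10 fit has the smallest in-sample error but, once we leave the training window, its forecasts are by far the worst.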
What then? I believe that VARMA modelling (also discussed in another answer) is a better way to go, namely a vector autoregressive moving average model:
$$s_{it} - \sum_{j=1}^{m}x_{ij} = \phi(L)s_{it} + \psi(L)u_{it},\qquad t=4,...,36\qquad i=1,...,n$$
where $\phi(L)$ and $\psi(L)$ are polynomials in the lag operator $L$, which shifts the time index $t$.
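As a minimal illustration of the autoregressive part only (a Python sketch with a hypothetical AR(1) series; a real VARMA fit would estimate $\phi(L)$ and $\psi(L)$ jointly, and the coefficient 0.8 here is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(7)

# A hypothetical (already x-adjusted) balance series following an AR(1):
# s_t = phi * s_{t-1} + u_t, with phi = 0.8.
n, phi = 500, 0.8
s = np.zeros(n)
for t in range(1, n):
    s[t] = phi * s[t - 1] + rng.normal()

# Least-squares estimate of phi: regress s_t on s_{t-1}.
phi_hat = (s[1:] @ s[:-1]) / (s[:-1] @ s[:-1])
print(round(phi_hat, 3))
```

The estimate recovers the lag-polynomial coefficient from the data rather than forcing the dynamics through powers of $t$.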
47,403
Longitudinal predictive models
Your comment "Hence, this is a forecasting type problem, but not in the time series sense where you observe a sequence of data significantly longer than the time horizon you are trying to predict." causes me some concern. If I observe 3 numbers, say 8, 10, 12, the "best model" to predict this stream for the next 33 periods would be an ARIMA/transfer-function model with a causal predictor ($t$) of the form y(t) = 6 + 2*t + a(t). Thus to me it is a "typical time series forecasting problem", as there is no requirement (although it is a good suggestion) that the data used to develop a model be "longer" than the length of the prediction. As an aside, a functionally equivalent representation is y(t) = y(t-1) + 2 + a(t), both yielding the same forecasts.
As new observations become available, e.g. the four values 8, 10, 12, 10, the best that could be done would be to identify the last observation (value 10) as an anomaly, and the equation would remain the same as the one developed using the first three observations. In terms of generality, the "8" could be considered an anomaly to the pattern "10, 12, 10", which just illustrates the problem with small data sets. This ambiguity would go away as the observed sample got larger.
One of our classic time series is 1, 9, 1, 9, 1, 9, 1, 9, 5, 9, where the "5", although near the mean, is nowhere near the expected value of "1".
Combining ARIMA (the general term) with empirically detected exceptions (pulses, level/step shifts, seasonal pulses, local time trends) can lead to useful models.
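The arithmetic behind the 8, 10, 12 example can be checked directly. This Python sketch fits the deterministic trend by least squares; with three exactly linear observations the residuals are zero and y(t) = 6 + 2*t is recovered, so the forecasts just continue the line.

```python
import numpy as np

# Three observations 8, 10, 12 at t = 1, 2, 3 determine the trend fit
# y(t) = 6 + 2*t exactly (zero residuals).
t = np.array([1.0, 2.0, 3.0])
y = np.array([8.0, 10.0, 12.0])
b1, b0 = np.polyfit(t, y, 1)       # slope, intercept

# Forecasts for the next periods continue the fitted line.
forecast = b0 + b1 * np.arange(4, 7)
print(b0, b1, forecast)
```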
47,404
Longitudinal predictive models
Going by your statement, you might want to look at hidden Markov modeling. The paper by Paas et al. will give you a good idea. For fitting such a model, you can look at the depmixS4 package in R.
One of your constraints might be that you can use only information up to month 3, which will hurt your estimation.
Hope this helps. I would like to hear from you if I have not understood your problem correctly.
47,405
Critical effect sizes and power for paired t test
Yes, this is possible and even fairly easy, but additional information is required. Specifically, we have to make an assumption about what the correlation between the observations in each pair is.
The effect size as a difference in standard deviation units is usually referred to as $d$. We can apply a correction factor to $d$ to incorporate the information about the aforementioned correlation, and then we can use our standard power formulae with this corrected $d$ (making sure to also mind the change in degrees of freedom associated with moving to the paired design) to compute power. The corrected $d$ is
$$
d_o = \frac{d}{\sqrt{1-r}},
$$
where $r$ is the correlation. I have called this $d_o$ because it is sometimes referred to as the "operative effect size." For example, with $d=2$ and $r=0.5$ we get $d_o = 2/\sqrt{0.5} \approx 2.83$.
Here is a little R routine that computes a table of the minimum number of PAIRS as a function of the assumed correlation and the desired power level, with $d=2$ assumed.
library(pwr)  # package for the pwr.t.test() function;
              # may need to install first with install.packages()

# define a function to get the minimum number of pairs
# for a given correlation and desired power level
getN <- function(r, p){
  unlist(mapply(pwr.t.test, d = 2/sqrt(1-r), power = p,
                MoreArgs = list(n = NULL, sig.level = .05, type = "paired"))["n",])
}

# apply this function to all combinations of the parameters below
tab <- outer(seq(0, .95, .05), c(.7, .8, .9, .95, .99, .999), "getN")
dimnames(tab) <- list("Correlation" = seq(0, .95, .05),
                      "DesiredPower" = c(.7, .8, .9, .95, .99, .999))
tab
Which returns the following:
DesiredPower
Correlation 0.7 0.8 0.9 0.95 0.99 0.999
0 3.767546 4.220731 4.912411 5.544223 6.888820 8.656788
0.05 3.691858 4.126240 4.787326 5.389850 6.669683 8.350091
0.1 3.615930 4.031562 4.662220 5.235637 6.451021 8.044096
0.15 3.539645 3.936653 4.537050 5.081483 6.232774 7.738792
0.2 3.462940 3.841433 4.411750 4.927417 6.014903 7.434270
0.25 3.385708 3.745774 4.286234 4.773338 5.797404 7.130529
0.3 3.307922 3.649640 4.160447 4.619143 5.580267 6.827580
0.35 3.229382 3.552889 4.034209 4.464751 5.363362 6.525430
0.4 3.149970 3.455310 3.907393 4.309986 5.146613 6.224026
0.45 3.069435 3.356743 3.779777 4.154653 4.929824 5.923282
0.5 2.987581 3.256903 3.651065 3.998456 4.712773 5.623032
0.55 2.904079 3.155423 3.520841 3.841020 4.495111 5.323066
0.6 2.818472 3.051834 3.388672 3.681805 4.276260 5.022875
0.65 2.730145 2.945449 3.253781 3.520048 4.055501 4.721751
0.7 2.638237 2.835369 3.115118 3.354639 3.831565 4.418560
0.75 2.541442 2.720152 2.971074 3.183823 3.602697 4.111397
0.8 2.437713 2.597460 2.819127 3.004879 3.365682 3.796879
0.85 2.323340 2.463226 2.654597 2.812710 3.114890 3.468745
0.9 2.190677 2.309002 2.467897 2.596901 2.838233 3.113596
0.95 2.018024 2.110699 2.231866 2.327720 2.501567 2.692358
Note that $d=2$ is considered in many fields quite a large effect size, so the resulting minimum numbers of pairs are all quite low.
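One cell of the table can be cross-checked outside R. This is an illustrative Python sketch, assuming scipy is available and that pwr.t.test's paired computation uses noncentrality $d_o\sqrt{n}$ with $n-1$ degrees of freedom for a two-sided test:

```python
from math import sqrt
from scipy import stats

def paired_power(d, r, n, alpha=0.05):
    """Power of a two-sided paired t test with n pairs, using the
    operative effect size d_o = d / sqrt(1 - r)."""
    d_o = d / sqrt(1 - r)
    df = n - 1
    ncp = d_o * sqrt(n)
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    # power = P(|T| > t_crit) under the noncentral t distribution
    return (1 - stats.nct.cdf(t_crit, df, ncp)
            + stats.nct.cdf(-t_crit, df, ncp))

# The table row r = 0.5, desired power 0.8 reports n ≈ 3.2569 pairs;
# plugging that n back in should return a power of about 0.8.
print(round(paired_power(d=2, r=0.5, n=3.256903), 3))
```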
47,406
Critical effect sizes and power for paired t test
Any effect size is possible; it's hard to tell what you mean by "makes sense".
There are many power calculators on the internet found with a simple Google search. You'll see that G*Power is often recommended here.
Conceptually you calculate power in more or less the same way. In the independent case you're probably using the pooled variance of your conditions to get an SD; that's Cohen's $d$. However, in the repeated measures case it will be the SD of the effect, which incorporates the covariance between the conditions. If you have a well-justified repeated measures design, the covariance should be fairly large, which results in a small effect SD; therefore, you'll require a smaller N. You need to somehow find this effect SD or covariance (or correlation). If there are no repeated measures studies from which to derive it and you don't have prior data, you might be better off taking a different approach (at least for your first repeated measures study).
You don't have to calculate power before conducting a study. You could take the following approach. Decide how much of an effect you care about; let's say it's 4 (not 4 SDs, but 4 on your scale, whatever it is... it could even be your original 2 SDs, although that's rather large). Effects smaller than that wouldn't really mean much even if the test were significant, and you want to be able to detect effects at least that size. Run subjects until you can detect an effect of that size (calculate the width of a confidence interval for the effect), and stop once you can detect the desired value. I've previously posted simulations showing that this does not inflate alpha appreciably, and it gives you the expected power and N on average. (Keep in mind that you are not allowed to make decisions based on the significance of a test, only on sensitivity. That requires discipline.)
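That stopping rule can be sketched roughly as follows (illustrative Python; the target half-width, mean, and SD are made-up numbers, and this is not the simulation referred to above). The key property is that stopping depends only on the precision of the estimate, never on whether a test came out significant.

```python
import math
import random
import statistics

random.seed(3)

def run_until_precise(true_mean, sd, target, n_min=5, n_max=10000):
    """Collect observations until the approximate 95% CI half-width
    for the mean falls below `target` (the smallest effect we care
    about). Uses the normal critical value 1.96 as an approximation."""
    data = []
    while len(data) < n_max:
        data.append(random.gauss(true_mean, sd))
        if len(data) >= n_min:
            halfwidth = 1.96 * statistics.stdev(data) / math.sqrt(len(data))
            if halfwidth < target:
                return len(data), statistics.mean(data)
    return len(data), statistics.mean(data)

# Hypothetical study: effect of interest 2 scale units, noisy outcome.
n_used, est = run_until_precise(true_mean=4.0, sd=8.0, target=2.0)
print(n_used, round(est, 2))
```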
47,407
Ideal number of variables for PCA
One of the main applications of PCA is to reduce the dimensionality when there are many variables. So yes, you should use all of the variables. I routinely apply PCA to tens of thousands of variables (gene expression data) and it works very well.
What can happen is that when analysing the PCA you will have to look into more than the first two or three components. Often the factor that you would like to understand does not contribute the majority of the variance, and you may, for example, see nice clustering of your samples only in the fifth or tenth component (yes, I have seen cases like this).
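The "clustering only in a late component" phenomenon is easy to reproduce in a small simulation. This is an illustrative Python sketch with made-up data (not gene expression data): a two-group signal is placed on a single low-variance variable, so the leading components carry only uninteresting variance.

```python
import numpy as np

rng = np.random.default_rng(0)

# 200 samples x 10 variables. Nine variables carry large but
# uninteresting variance; the group difference sits on one
# low-variance variable.
n, p = 200, 10
X = rng.normal(0, 5, (n, p))
groups = np.repeat([0, 1], n // 2)
X[:, 0] = rng.normal(0, 0.5, n) + 4.0 * groups   # subtle group signal

# PCA via eigendecomposition of the covariance of centered data.
Xc = X - X.mean(axis=0)
evals, evecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
order = np.argsort(evals)[::-1]                  # decreasing variance
scores = Xc @ evecs[:, order]

# Find the component along which the two group means separate most.
sep = [abs(scores[groups == 0, k].mean() - scores[groups == 1, k].mean())
       for k in range(p)]
best = int(np.argmax(sep)) + 1
print("group separation is clearest in component", best)
```

Because the signal variable has far less variance than the noise variables, the separating direction ends up among the last components rather than the first.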
47,408
Interpreting 3D scatter plot
3D scatterplots are sometimes a bit confusing, especially if you can't rotate the plot. However, the scatterplot matrix supports the interpretation, at least here, rather nicely, even if it is missing the colors.
As a.desantos already pointed out, the individual scatterplots in the second image are projections onto different planes. If you think about how the points in the 3D scatterplot have to be located in order to give these projections, it may become clearer.
The projection onto the plane x1, x3 would look roughly like this (I can't get the image to upload, so the colors are marked with letters: r = red, g = green, y = yellow, b = blue):
X3 |y y y y y b b b b b
|y y y y y b b b b b
|y y y y y b b b b b
|y y y y y b b b b b
|y y y y y b b b b b
|r r r r r g g g g g
|r r r r r g g g g g
|r r r r r g g g g g
|r r r r r g g g g g
|r r r r r g g g g g
+-------------------
X1
47,409
Interpreting 3D scatter plot
I think there are colours missing in the second image. The panels are the projections onto the different coordinate planes of $x_1,x_2,x_3$.
47,410
How do I find data to show whether a shaved die is really loaded?
What we want to know is how well a reasonable test (like a chi-squared test) can detect a small difference in the chances of the six outcomes. This is its power: it depends on the size of the difference (a large difference is easy to see) and on the number of observations (a large number can detect smaller differences).
The shaving purportedly increases the chance of landing one or six (keeping their chances approximately equal). A natural way to measure the effect of shaving, then, is in terms of the increase in the chance of a one. Let's call this $\varepsilon$. The chances of $(1,2,3,4,5,6)$ are therefore $(1/6+\varepsilon, 1/6-\varepsilon/2, 1/6-\varepsilon/2, 1/6-\varepsilon/2, 1/6-\varepsilon/2, 1/6+\varepsilon)$. Clearly $-1/6\le \varepsilon\le 1/3$; here I will examine only cases where $\varepsilon \ge 0$ (shaving favors one and six).
It's not so easy to work out the results of the chi-squared test theoretically in this situation (although it is possible). That's typical of power studies, which usually then resort to Monte-Carlo simulation. Lots of simulations, actually: we need to contemplate various combinations of $\varepsilon$ (which, after all, we don't know at the outset) and sample size $n$. This will give us the information needed to identify the smallest sample size (number of rolls) that will detect an "interesting" difference $\varepsilon$ "reliably."
The individual simulations run and re-run your experiment with $n$ throws of a die, doing this thousands of times. In each experiment a chi-squared statistic is computed. This is a measure of the discrepancy between the distribution of observations and the hypothetical fair distribution ($1/6$ of the total for each face). The proportion of times this statistic is significantly large is the Monte-Carlo estimate of the power of the test (for this particular $n$ for this particular die).
Here, for example, are the outcomes of a set of simulations for $n=30$, setting $\varepsilon$ to a range of values (including $0$, corresponding to a fair die). Each simulation ran $10,000$ times, which is enough to produce robust, reproducible results.
Each simulation is depicted by a histogram of its chi-squared values. The "critical value" for 95% confidence is shown by a vertical red line. All outcomes larger than the critical value are painted in red. In the upper right--the case of no effect--the red values should comprise about 5% of the total, by design, because having 95% confidence means there is a 100-95 = 5% chance of detecting an effect that isn't really there.
As the effect size increases, the chance of detecting it--as indicated by the proportion of red in the histogram--also increases, as expected. It never reaches 100%, though, even for $\varepsilon=1/6=16.7$%. This is an extreme effect: now the one has a $1/6+1/6=1/3$ chance, the six also has a $1/3$ chance, and the other four faces only have a $1/12$ chance apiece. Obviously, tossing the die just $n=30$ times will only be able to detect a gross bias.
Upon repeating the same exercise for a large range of sample sizes $n$, for each possible effect size $\varepsilon$ we can plot the power against $n$. Here are the results:
The same set of effects is represented. We can tell which curve belongs to which effect, because the lower curves (with less power) must correspond to the smaller effects. Thus,
The bottom (orange) curve is close to $0.05$, especially for large $n$. This is because the tests are run with $100 - 5 = 95$% confidence.
The next lowest curves (light green and teal) show the power for effects of $\varepsilon=1$% and $2.1$%, respectively. Even for sample sizes of $1000$ (that is, $1000$ tosses of the die) we are not likely to detect this much difference in the probabilities. Notice that out of $1000$ tosses with $\varepsilon=2.1$%, we expect there to be about $188$ ones, $188$ sixes, and $156$ each of two through five: that's a pretty big discrepancy for a gambler.
With just $n=30$ tosses, the only effects we have a reasonable chance of detecting are those greater than $8.3$%, and even there the power is only about $1/3$.
(Notice that for smaller sample sizes, less than $30$ or so, the power for $\varepsilon=0$ drops noticeably below the nominal value of $5$%. This is because the chi-squared statistic does not exactly follow a chi-squared distribution for small sample sizes, whence the critical value (which is computed from an assumed chi-squared distribution) is incorrect. If it were to be corrected, the power for the smaller sample sizes would drop a little.)
Evidently, to detect small changes (say of $\varepsilon=1$% or less) many thousands of rolls will be needed. You can work out the power for any $n$ and $\varepsilon$ yourself by modifying and re-running the R code used to produce these figures. Be careful! The study reported here required almost four minutes to run. Test any modifications first on smaller simulations, reducing the number of iterations from 1e4 to $1000$ (1e3) or even $100$ at first until you know the code will do exactly what you need.
simulate <- function(eps, n=1) {
# Run a single experiment and return its chi-squared statistic.
d <- sample.int(6, n, prob=rep(1/6,6)+eps*c(1,rep(-1/2,4),1), replace=TRUE)
d <- factor(d, levels=1:6)
  chisq.test(tabulate(d))$statistic
}
power <- function(eps, n.sample, n.iter) {
# Return the results of `n.iter` experiments.
replicate(n.iter, simulate(eps, n.sample))
}
power.plot <- function(eps, n.sample, n.iter, alpha=0.05, plot=TRUE) {
# Optionally plot a histogram from `power` and return its power
# for a test run at 100 - `alpha`% confidence.
title <- paste("n=", n.sample, " at eps=", round(eps*100, 1), "%", sep="")
d <- power(eps, n.sample, n.iter)
q <- qchisq(1-alpha,5)
if (plot) {
h <- hist(d, plot=FALSE)
hist(d, breaks=h$breaks, main=title, col="#c0c0c0",
xlab="Chi-squared", freq=FALSE)
h2 <- hist(d[d > q], breaks=h$breaks, plot=FALSE)
h2$density <- h2$density * sum(d>q)/n.iter
plot(h2, add=TRUE, col="#ff404080", freq=FALSE)
abline(v = qchisq(1-alpha,5), col="#ff4040", lwd=2)
}
return (sum(d > q) / length(d))
}
#
# Set up the effects and sample sizes to study.
#
eps <- c(0, 1, 2, 4, 8, 16)/16 * 1/6
n <- c(10, 15, 20, 30, 40, 60, 80, 120, 160, 240, 320, 480, 640, 960)
alpha <- 0.05
#
# Figure 1.
#
par(mfrow=c(2,3))
system.time(p <- sapply(eps, function(eps)
power.plot(eps, n.sample=30, n.iter=1e4, alpha=alpha))
)
#
# Figure 2.
#
system.time(p <- sapply(eps, function(eps)
sapply(n, function(n)
power.plot(eps, n.sample=n, n.iter=1e4, alpha=alpha, plot=FALSE))
))
dimnames(p) <- list(n=n, eps=eps)
colors <- hsv(seq(1/10, 9/10, length.out=length(eps)), .8, 1)
par(mfrow=c(1,1))
plot(c(min(n), max(n)), c(0,1), type="n", log="x",
xlab="Sample size", ylab="Power", main="Power vs. Sample Size")
abline(h=alpha, lwd=2, col="#505050")
tmp <- sapply(1:length(eps),
function(j) lines(n, p[, j], ylim=c(0,1),
col=colors[j], lwd=3, lty=3))
|
47,411
|
How do I find data to show whether a shaved die is really loaded?
|
One simple approach is to focus on a single face, say six (from the link it looks like this is one of the "flat" faces, so it should come up less often than 1/6 of the time). Then, if you roll the die $n$ times, you can test the hypothesis that $p = 1/6$ using the test statistic
$$ Z = \frac{ \hat p - 1/6}{\sqrt{\frac{ (1/6) \times (5/6) }{n} }} $$
where $\hat p$ is the fraction of rolls that come up six.
A quick calculation tells you that if you roll the die 100 times you will get a p-value < 0.05 (often considered "strong evidence" against the null hypothesis that the die is fair) only if your observed fraction of sixes is more than about 0.24 or less than about 0.1. If you roll the die 1000 times, observed fractions > 0.19 or < 0.145 will yield evidence at the 0.05 level to reject the "fair die" hypothesis.
You can reduce the number of required tosses somewhat (but not much) by counting two shaved faces (e.g., one and six), where the null hypothesis is that these faces will come up 1/6 + 1/6 = 1/3 of the time and the relevant statistic is
$$ Z = \frac{ \hat p - 1/3}{\sqrt{\frac{ (1/3) \times (2/3) }{n} }} $$
I suspect that the amount of shaving which can go undetected by the naked eye does not modify roll probabilities substantially, so you might be looking at a lot of throws to test out your dice!
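The arithmetic above is easy to reproduce in R. This is only a sketch with a hypothetical count of sixes (the values of `n` and `sixes` are assumptions, not data from the question):

```r
n <- 1000                                   # number of rolls (assumed)
sixes <- 190                                # observed count of sixes (assumed)
p.hat <- sixes / n
z <- (p.hat - 1/6) / sqrt((1/6) * (5/6) / n)
p.value <- 2 * pnorm(-abs(z))               # two-sided normal p-value
c(z = z, p.value = p.value)
```

With 190 sixes in 1000 rolls, $\hat p = 0.19$ just clears the threshold quoted above and yields a p-value slightly below 0.05.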
|
47,412
|
A question from a test in statistics?
|
None are correct.
The first is wrong because Spearman's rank correlation is defined as the Pearson correlation computed on ranks instead of the actual values: therefore when the Spearman correlation is not $1$, the Pearson correlation cannot be $1$ either.
In the second answer the covariance and $S_x$ are measured in different units and so are not even comparable.
In the third answer the percentage of variance explained must be 0.
In the fourth answer, the regression slope is not determined by the value of Pearson's correlation coefficient: when the slope is $1$, the PCC can be any value between $0$ and $1$ (not including $0$), and when the PCC is $1$, the slope can be any positive value.
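That last point is easy to check by simulation. This sketch (my own illustration, not part of the test question) fixes the slope near $1$ and varies only the noise level:

```r
set.seed(1)
x <- rnorm(200)
y1 <- x + rnorm(200, sd = 0.1)   # slope ~ 1, correlation near 1
y2 <- x + rnorm(200, sd = 3)     # slope ~ 1, correlation much lower
c(cor(x, y1), cor(x, y2))                          # very different PCCs
c(coef(lm(y1 ~ x))[2], coef(lm(y2 ~ x))[2])        # both slopes near 1
```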
|
47,413
|
Variance and covariance of binary data
|
The shortcut formula for the covariance of two binary variables is $(n\,k_{xy}-k_xk_y)/n^2$, where $k_x$ is the number of pairs in which $x=1$, $k_y$ is the number of pairs in which $y=1$, and $k_{xy}$ is the number of pairs in which $x=y=1$.
Both that formula and the formula you gave are usually called "population" formulas. They give the variance (or covariance) of the numbers in the data.
The answer that Matlab gave, and the corresponding answer it would give for the covariance, replace the $n^2$ in the denominator by $n(n-1)$. They are usually called "sample" formulas because they treat the data as a sample from some population and give an unbiased estimate of the variance (or covariance) in that population.
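As a quick check (my own sketch, with made-up binary data), the shortcut formula, after the $n/(n-1)$ adjustment, matches R's `cov()`, which uses the sample denominator:

```r
set.seed(2)
n <- 50
x <- rbinom(n, 1, 0.4)
y <- rbinom(n, 1, 0.6)
kx  <- sum(x); ky <- sum(y)
kxy <- sum(x == 1 & y == 1)
pop.cov  <- (n * kxy - kx * ky) / n^2   # population formula from above
samp.cov <- pop.cov * n / (n - 1)       # sample adjustment
all.equal(samp.cov, cov(x, y))
```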
|
47,414
|
Regarding the formula of using $\text{P}(Y|X)$ to compute $\text{E}[X]$
|
$$\text{E}[X] = \frac{\int x \, P(y_i \mid x) \, dx}{\int P(y_i \mid x) \, dx}$$
is not a general statement, but only a first step in expectation propagation (EP). EP tries to approximate a posterior distribution $P(x \mid \mathcal{D})$ using a given factorization of the joint,
$$P(x) \prod_i P(y_i \mid x).$$
To reduce clutter, the dependency on the data $\mathcal{D} = \{ y_1, ..., y_n \}$ is often dropped in the notation. Instead of a posterior distribution, it might actually be less confusing to just think about approximating any distribution whose unnormalized density is given by
$$\phi_0(x) \prod_i \phi_i(x).$$
The first moment of the true distribution would be
$$\text{E}[X] = \int x \, P(x) \, dx = \frac{\int x \, \phi_0(x) \prod_i \phi_i(x) \, dx}{\int \phi_0(x) \prod_i \phi_i(x) \, dx}.$$
EP works by iteratively refining the distribution with one of the factors and approximating the distribution by only keeping some of the moments.
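As a numerical illustration (my own, not from the slides): with a standard normal $\phi_0$ and a single normal likelihood factor for an observation $y=1$ with standard deviation $0.5$, the normalized first moment can be computed by quadrature and agrees with the conjugate-normal posterior mean of $0.8$:

```r
phi0 <- function(x) dnorm(x, mean = 0, sd = 1)     # "prior" factor
phi1 <- function(x) dnorm(1, mean = x, sd = 0.5)   # likelihood of y = 1
num <- integrate(function(x) x * phi0(x) * phi1(x), -Inf, Inf)$value
den <- integrate(function(x) phi0(x) * phi1(x), -Inf, Inf)$value
num / den   # conjugate formula gives (1/0.25) / (1 + 1/0.25) = 0.8
```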
|
47,415
|
Regarding the formula of using $\text{P}(Y|X)$ to compute $\text{E}[X]$
|
The formula on that slide was a straw man and not intended to make sense. The point was that moment matching does not make sense on an individual likelihood term in isolation. This is illustrated further on the next slides. I have actually seen this bad approach used in papers, so I thought it was worth pointing out. This is one of those cases where "you had to be there."
|
47,416
|
How is the working correlation matrix estimated for GEE?
|
If you look at the note (and your quotation) specifically, "The data in long form could be naively thrown into an ordinary least squares (OLS) linear regression…ignoring the correlation between subjects."
A good reference for your question is Liang and Zeger (1986) in Biometrika. Section 3.3 shows that the correlation parameters $\alpha$ can be estimated from the Pearson residuals $\hat{r}_{it}$. The specific estimator depends on the choice of working correlation matrix $R(\alpha)$ (independent, exchangeable, autoregressive, M-dependent or unstructured). The general approach is $$\hat{R}_{uv}=\sum_{i=1}^K\hat{r}_{iu}\hat{r}_{iv}/(N-p).$$
Specific estimators are given in section 4.
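For the exchangeable case, the single parameter $\alpha$ is obtained by averaging the pairwise moment estimates over all time pairs $(u,v)$. The following is only my sketch with stand-in residuals, not code from the paper:

```r
set.seed(4)
K <- 200; n.times <- 4; p <- 2
r <- matrix(rnorm(K * n.times), K, n.times)   # stand-in Pearson residuals
uv <- combn(n.times, 2)                       # all time pairs (u, v)
alpha.hat <- mean(apply(uv, 2, function(j)
  sum(r[, j[1]] * r[, j[2]]) / (K - p)))
alpha.hat   # near 0 here, since these residuals are independent noise
```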
|
47,417
|
Modelling zero-inflated proportion data in R using GAMLSS
|
The answer below references the inflated beta GAMLSS documentation (Rigby & Stasinopoulos, 2010, section 10.8.2, page 215). It would seem that your data could be fitted with the inflated beta model.
The response variable for the $\nu$ component of the model is a ratio of probabilities (an odds) given by
$\nu = p_0 / (1-p_0-p_1)$
where $p_0$ is the probability of zero response and $p_1$ the probability of one response. Hence $\nu$ can take values > 0.
The $\tau$ component is given by
$\tau = p_1 / (1-p_0-p_1)$
Using the predicted response values for the $\nu$ and $\tau$ components, the probability of zero response can be computed as
$p_0 = \nu / (1+\nu+\tau)$
Hope this helps.
|
47,418
|
Modelling zero-inflated proportion data in R using GAMLSS
|
I think the odds $\frac{p_0}{1-p_0-p_1}$ are given by $e^{\nu}$ not $\nu$, and similarly for $\frac{p_1}{1-p_0-p_1} = e^{\tau}$!
This means that in the above answer one needs to use the exponentials of $\nu$ and $\tau$:
$p_0 = \frac{e^\nu}{1 + e^{\nu} + e^{\tau}}$
and similarly for $p_1$.
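In R this back-transformation works out as follows (the predictor values here are hypothetical, purely to show the computation):

```r
nu  <- -0.5   # hypothetical linear predictor for the zero component
tau <- -2.0   # hypothetical linear predictor for the one component
p0 <- exp(nu)  / (1 + exp(nu) + exp(tau))   # probability of a zero response
p1 <- exp(tau) / (1 + exp(nu) + exp(tau))   # probability of a one response
c(p0 = p0, p1 = p1)
```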
|
47,419
|
Weighted standard deviation of average
|
You're confusing addition of random variables with concatenation of samples (easy to do, it took me a while to realize why your code didn't work!). So for independent random variables $X$ and $Y$ you can write $\rm{Var}(aX+bY) = a^2\rm{Var}(X) + b^2\rm{Var}(Y)$, noting the coefficients are squared. This would apply (approximately) to samples as follows
set.seed(1)
n <- 100
mu = 1
sigma = 1
a<-rnorm(n,mu,sigma)
b<-rnorm(n,mu,sigma)
weight_a <- 1/4
weight_b <- 3/4
sqrt(sd(a)^2*weight_a^2+sd(b)^2*weight_b^2)
[1] 0.7526849
sd(weight_a*a + weight_b*b)
[1] 0.7524718
For concatenation of samples you need to apply the formula for variance $\rm{Var}(X) = E(X^2) - (E(X))^2$, adjusted appropriately for the fact that you're using samples, i.e. multiply by $n/(n-1)$. This is illustrated with R code below.
suma2 = sum(a^2)
sumb2 = sum(b^2)
suma = sum(a)
sumb = sum(b)
sumc2 = suma2 + sumb2
sumc = suma + sumb
nc = n + n
(sumb2/n - (sumb/n)^2) * n/(n-1)
[1] 0.9175323
sd(b)^2
[1] 0.9175323
sumc2 = suma2 + sumb2
sumc = suma + sumb
(sumc2/nc - (sumc/nc)^2) * nc/(nc-1)
[1] 0.8632217
sd(c(a,b))^2
[1] 0.8632217
When you are concatenating weighted series, the weights can be incorporated naturally to give the formula.
$$ \rm{Var}(\mathbf{w}) = \left(\frac{w_a^2 S_{aa} + w_b^2 S_{bb}}{n_a + n_b} - \left(\frac{w_a S_a + w_b S_b}{n_a + n_b}\right)^2 \right) \times \frac{n_a + n_b}{n_a + n_b -1}, $$
where $\mathbf{w}$ is the series formed by concatenating series $\mathbf{a}$ of length $n_a$ weighted by $w_a$ and series $\mathbf{b}$ of length $n_b$ weighted by $w_b$. $S_{aa}$ is the sum of squares of series $\mathbf{a}$, $S_{a}$ is the sum of series $\mathbf{a}$, and likewise for $\mathbf{b}$.
This is illustrated in the following R code.
sumw2 = weight_a^2*suma2 + weight_b^2*sumb2
sumw = weight_a*suma + weight_b*sumb
(sumw2/nc - (sumw/nc)^2) * nc/(nc-1)
[1] 0.3314697
sd(c(weight_a*a, weight_b*b))^2
[1] 0.3314697
Note that in the first case, addition, I had to use the same size of series. For convenience, I carried on with the same series. But the code used for the second case, concatenation, should work for different-sized series.
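A quick numeric check of the concatenation formula, this time with different-sized series (a Python sketch paralleling the R code above):

```python
# Verify the weighted-concatenation variance formula against a direct
# computation on the concatenated, weighted series.
import random
import statistics

random.seed(2)
na, nb = 80, 120
a = [random.gauss(1, 1) for _ in range(na)]
b = [random.gauss(1, 1) for _ in range(nb)]
wa, wb = 0.25, 0.75

# Sums and sums of squares, as in the formula.
Saa, Sbb = sum(x * x for x in a), sum(x * x for x in b)
Sa, Sb = sum(a), sum(b)
nc = na + nb
var_formula = ((wa**2 * Saa + wb**2 * Sbb) / nc
               - ((wa * Sa + wb * Sb) / nc) ** 2) * nc / (nc - 1)

# Direct sample variance of the concatenated weighted series.
var_direct = statistics.variance([wa * x for x in a] + [wb * x for x in b])
```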
However, as @Peter Ellis states in his answer, the problem you are ultimately trying to solve may be something like a two-sample t test, where you assume that the two samples are from the same population, and so the underlying variance of each sample is the same. In this case, you want to estimate the population variance. The formula for the pooled variance is well-known and can be found on wikipedia (wikipedia also provides a generalization to multiple samples).
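For completeness, the pooled-variance estimate mentioned above can be sketched as follows (Python; the two samples are assumed to come from populations with a common variance, here $\sigma^2=4$):

```python
# Pooled variance: s_p^2 = ((n_a-1) s_a^2 + (n_b-1) s_b^2) / (n_a + n_b - 2)
import random
import statistics

random.seed(3)
a = [random.gauss(0, 2) for _ in range(50)]
b = [random.gauss(5, 2) for _ in range(70)]

na, nb = len(a), len(b)
s2a, s2b = statistics.variance(a), statistics.variance(b)
pooled = ((na - 1) * s2a + (nb - 1) * s2b) / (na + nb - 2)
```

Note the pooled estimate always lies between the two subsample variances, unlike a variance computed on the concatenated samples, which also picks up the difference in means.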
|
47,420
|
Weighted standard deviation of average
|
It's not completely clear exactly what you want with your weighted standard deviation, but I will presume you want the standard deviation of a single population from which you have drawn two samples, and for some reason (e.g. sampling method) you want to give more weight to one of the samples. The main point where you go wrong is thinking that this can be created in some way from the standard deviations of the subsamples. Consider that the variance of the underlying population is a reflection of the average squared distance of each point from the pooled mean (not the two means of the two subsamples, which is what sd(a) etc. gets you). So you need an estimate of that pooled mean, and you need a weighted average of the squared distances of each point from that pooled mean.
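The recipe in the answer can be sketched directly (Python; the samples, weights, and group means are illustrative assumptions):

```python
# Weight the squared deviations from the *pooled* (weighted) mean,
# not from each subsample's own mean.
import random

random.seed(4)
a = [random.gauss(1, 1) for _ in range(100)]
b = [random.gauss(3, 1) for _ in range(100)]
wa, wb = 0.25, 0.75

wsum = wa * len(a) + wb * len(b)
pooled_mean = (wa * sum(a) + wb * sum(b)) / wsum
wvar = (wa * sum((x - pooled_mean) ** 2 for x in a)
        + wb * sum((x - pooled_mean) ** 2 for x in b)) / wsum
wsd = wvar ** 0.5
```

Because the group means differ, `wvar` exceeds the within-group variance of either sample: the spread of the pooled population includes the gap between the groups.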
|
|
47,421
|
Weighted standard deviation of average
|
It seems your maths are wrong and there are some conceptual troubles. Let $Z$ be the actual compilation of your two distributions.
$$
Z= 0.25A+0.75B \\
V[Z] = V[0.25A+0.75B] = 0.25^2V[A]+0.75^2V[B] + 2\cdot 0.25\cdot 0.75 \cdot Cov[A,B]
$$
And because both are iid we have
$$
V[Z] = 0.25^2V[A]+0.75^2V[B] \\
\widehat{\mathrm{SD}}[Z] = \sqrt{0.25^2V[A]+0.75^2V[B]}
$$
Now, R will calculate the standard deviation of $Z$ based on this variance, but it will not necessarily equal $\widehat{\mathrm{SD}}[Z]$, I think, because that is a biased estimate.
And this is your other formula
$$
\mathrm{SD}_{weighted} = \sqrt{0.25\hat{V}[A]+0.75\hat{V}[B]} \\
$$
There are a couple of things.
1. The formula isn't correct. Your weights should be squared.
2. The standard deviation is a biased estimate if estimated via the variance. Because of this, correction might have been made by R. The square of this is then no longer guaranteed to be the same.
3. While the distributions are iid, the sample covariance will probably not be zero. It is reasonable to expect that the results are not completely independent.
So those are the reasons you get different results. Even if you corrected your code and applied the correct formula, you will probably still have minor differences because of the corrected downward bias of the SD estimator.
Edit: sorry I misread the code. It's still wrong though ;)
|
|
47,422
|
How to interpret multimodal distribution of bootstrapped correlation?
|
My guess would be that there is a (set of) outlier(s) in your data. One mode represents those samples that included them and the other the samples that did not include them. My guess would be that the right mode corresponds to the samples that exclude both the point with the smallest value of $x$ and the point with the largest value of $x$ in your scatterplot. Similar patterns can also occur in larger samples.
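This mechanism is easy to reproduce (an illustrative Python simulation, not the questioner's data): with one influential outlier, bootstrap resamples split into two groups depending on whether the outlier was drawn, and each group produces its own mode.

```python
# Bootstrap the correlation of 15 uncorrelated points plus one outlier at
# (10, 10); split resamples by whether the outlier index was drawn.
import random
import statistics

def corr(xs, ys):
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

random.seed(5)
x = [random.gauss(0, 1) for _ in range(15)]
y = [random.gauss(0, 1) for _ in range(15)]
x.append(10.0); y.append(10.0)          # one influential outlier (index 15)

with_out, without_out = [], []
for _ in range(2000):
    idx = [random.randrange(len(x)) for _ in range(len(x))]
    r = corr([x[i] for i in idx], [y[i] for i in idx])
    (with_out if 15 in idx else without_out).append(r)
```

The resamples containing the outlier cluster near a high correlation; the rest cluster near zero, giving the two modes.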
|
|
47,423
|
Why is k called representer of evaluation in the definition of kernel functions
|
When it says "follows directly from the definition", it means directly from the definition of $\langle -,- \rangle$, not directly from the definition of $\Phi$. Equation 2.24 on page 33 is a definition (that's why it uses the := notation instead of just an equals sign.) The definition says that if $f = \sum_i \alpha_i k(.,x_i)$ and $g = \sum_j \beta_j k(.,y_j)$ then $\langle f,g\rangle$ is defined to be $\sum_{i,j} \alpha_i \beta_j k(x_i, y_j)$. In particular, if $f=k(.,x)$ and $g=k(.,x')$ then the definition in Equation 2.24 says that
$$\langle f, g \rangle = k(x,x')$$
(think of one $\alpha_i$ and one $\beta_j$ being $1$ and the rest $0$). This is Equation 2.30.
Equation 2.29 is similar but more general. Suppose we have some $f = \sum_j \beta_j k(.,x_j)$. Then
$$\langle k(.,x), f \rangle = \langle k(.,x), \sum\beta_j k(.,x_j) \rangle$$
and again Definition 2.24 says that this equals
$$\sum \beta_j k(x, x_j)$$
by definition. But this is just $f(x)$, so that gives (2.29).
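To make (2.29) concrete, here is a small Python check (the Gaussian kernel, expansion points, and coefficients are illustrative choices; by Definition 2.24 the equality is an identity, which the computation confirms):

```python
# Reproducing property: for f = sum_j beta_j k(., x_j), the inner product
# <k(., x), f> from Definition 2.24 equals sum_j beta_j k(x, x_j) = f(x).
import math

def k(u, v, gamma=0.5):
    # Gaussian kernel (any p.s.d. kernel works here)
    return math.exp(-gamma * (u - v) ** 2)

xs = [0.0, 1.0, 2.5]          # expansion points of f
betas = [1.0, -2.0, 0.5]      # coefficients of f

def f(x):
    return sum(b * k(x, xj) for b, xj in zip(betas, xs))

x = 0.7
# <k(., x), f> per Definition 2.24: one alpha equal to 1, at the point x.
inner = sum(b * k(x, xj) for b, xj in zip(betas, xs))
```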
|
|
47,424
|
Why is k called representer of evaluation in the definition of kernel functions
|
In an RKHS framework, the function that minimizes a regularized empirical risk (with a Hilbert-norm penalty) can be written as a linear combination of the kernel evaluated at the data points and the point $x$ itself.
In detail: take the reproducing p.s.d. kernel $K(\cdot,\cdot)$, a function of two arguments. Fix one argument at $x$, the point where you want to evaluate $f(\cdot)$, and let the other argument range over the data points, giving $K(x,\cdot)$. The representer theorem then ensures that there exists a set of coefficients $\alpha_i$ that go along with $K(x,\cdot)$ such that the minimizer admits the representation $f(x)=\sum_i \alpha_i K(x,x_i)$, where the sum runs over the data points $x_i$.
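As a sketch of this representation in practice, here is kernel ridge regression in Python, a standard instance of the representer theorem with $\alpha = (K + \lambda I)^{-1}y$ (the kernel, data, and regularization constant are illustrative assumptions, not from the answer):

```python
# Kernel ridge regression: the fitted function is f(x) = sum_i alpha_i K(x, x_i).
import math

def K(u, v):
    return math.exp(-0.5 * (u - v) ** 2)   # Gaussian kernel

def solve(A, rhs):
    # Gaussian elimination with partial pivoting (fine for small systems).
    n = len(A)
    M = [row[:] + [r] for row, r in zip(A, rhs)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            fac = M[r][c] / M[c][c]
            M[r] = [mr - fac * mc for mr, mc in zip(M[r], M[c])]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

xs = [0.0, 1.0, 2.0, 3.0]          # illustrative data
ys = [0.0, 0.8, 0.9, 0.1]
lam = 1e-3                          # small ridge penalty
G = [[K(xi, xj) + (lam if i == j else 0.0) for j, xj in enumerate(xs)]
     for i, xi in enumerate(xs)]
alphas = solve(G, ys)

def f(x):
    return sum(a * K(x, xi) for a, xi in zip(alphas, xs))
```

With a small $\lambda$ the expansion $f(x)=\sum_i \alpha_i K(x,x_i)$ nearly interpolates the training points.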
|
|
47,425
|
Estimating a distribution from above/below observations
|
You could try to directly estimate the CDF via a binomial rate smoother?
Here is an idealized example for x stemming from a normal distribution:
ci = seq(from=-3,to=3,length=500)
X = rnorm(500)
Y = rep(NA, 500)
for (i in 1:500) Y[i] = as.numeric(X[i] < ci[i] )
plot(ci,Y, type="s")
library(mgcv)
library(boot)
fit=gam(Y~s(ci), family=binomial(link="logit"))
plot(fit, trans=inv.logit, shade = TRUE)
To enforce monotonic behavior, in the above example, change the code to:
library(scam)
fitMonotone=scam(Y~s(ci,bs="mpi"), family=binomial(link="logit"))
InvLogit = function(x, SCALE=TRUE) {
if (SCALE) x = x -mean(x)
return(exp(x)/(1+exp(x)))
}
plot(fitMonotone, trans=InvLogit, shade = TRUE)
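A parallel sketch in Python: the R answer fits a spline smoother with gam/scam, but even a plain logistic regression in the cutoff $c$, fitted by gradient ascent, recovers a smooth monotone CDF-like curve for this idealized normal example (everything below is an illustrative assumption, not a translation of the spline fit):

```python
# Y_i = 1(X_i < c_i) observations; fit P(Y=1 | c) = logistic(b0 + b1*c).
import math
import random

random.seed(6)
n = 1000
cs = [-3 + 6 * i / (n - 1) for i in range(n)]          # cutoff grid
xs = [random.gauss(0, 1) for _ in range(n)]
ys = [1.0 if x < c else 0.0 for x, c in zip(xs, cs)]   # above/below indicator

b0, b1 = 0.0, 0.0
lr = 0.1
for _ in range(2000):                                  # batch gradient ascent
    g0 = g1 = 0.0
    for c, y in zip(cs, ys):
        p = 1.0 / (1.0 + math.exp(-(b0 + b1 * c)))
        g0 += y - p
        g1 += (y - p) * c
    b0 += lr * g0 / n
    b1 += lr * g1 / n

def cdf_hat(c):
    # smooth, monotone (b1 > 0) estimate of the CDF at cutoff c
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * c)))
```

The spline/scam route in the answer is more flexible (and can enforce monotonicity for non-logistic shapes); this parametric version just shows the underlying binomial-regression idea.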
|
|
47,426
|
What does "chi" mean and come from in "chi-squared distribution"?
|
Chi is a Greek letter. The canonical modern history references are Karl Pearson's introduction of the chi-square test in 1900 and R.A. Fisher's work in 1924, but there is ancient history too: F.R. Helmert in 1876 deserves more than a nod.
http://jeff560.tripod.com/c.html is a good start, especially if other historical bits and pieces are of interest. It includes links. Books such as Anders Hald's histories say more.
Chi appears to be just notation that Pearson used.
|
What does "chi" mean and come from in "chi-squared distribution"?
|
Chi is a Greek letter. The canonical modern history references are Karl Pearson's introduction of the chi-square test in 1900 and R.A. Fisher's work in 1924, but there is ancient history too: F.R. Hel
|
What does "chi" mean and come from in "chi-squared distribution"?
Chi is a Greek letter. The canonical modern history references are Karl Pearson's introduction of the chi-square test in 1900 and R.A. Fisher's work in 1924, but there is ancient history too: F.R. Helmert in 1876 deserves more than a nod.
http://jeff560.tripod.com/c.html is a good start, especially if other historical bits and pieces are of interest. It includes links. Books such as Anders Hald's histories say more.
Chi appears to be just notation that Pearson used.
|
What does "chi" mean and come from in "chi-squared distribution"?
Chi is a Greek letter. The canonical modern history references are Karl Pearson's introduction of the chi-square test in 1900 and R.A. Fisher's work in 1924, but there is ancient history too: F.R. Hel
|
47,427
|
Confidence error bars and "central point": Should we emphasize the median?
|
Median!
Note these advantages:
the median and its C.I. (see below) are equivariant to
monotone transformation of your data:
$$\mathrm{med}(g(x))=g(\mathrm{med}(x))$$
for any function $g$ monotone on the domain of $x$ (e.g. $\log()$ if $x>0$).
It's robust in the sense that it's minimally changed when you replace
any fraction $\varepsilon<1/2$ of your observations by arbitrary points (the minimax bias property of the median).
the median is interpretable without reference to an underlying distribution for your data, and so are its confidence intervals (see below).
A 95% confidence interval for the median is given by the order statistics of ranks $j$ and $k$, where:
$$j=\lceil n/2-1.96\sqrt{n/4}\rceil$$
$$k=\lceil n/2+1.96\sqrt{n/4}\rceil$$
for fat tailed and/or asymmetric distributions this yields C.I. that are much more precise than the Gaussian ones (and not that much less precise when the underlying data is narrow tailed and drawn from a symmetric distribution). These C.I. remain meaningful in many cases (bounded or discrete distributions) where that much cannot be said of the mean/sd based ones.
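A minimal sketch of these rank formulas (Python; the data are simulated standard normals, so the true median is 0):

```python
# Rank-based 95% C.I. for the median from the order statistics j and k.
import math
import random

random.seed(7)
n = 199
x = sorted(random.gauss(0, 1) for _ in range(n))

j = math.ceil(n / 2 - 1.96 * math.sqrt(n / 4))
k = math.ceil(n / 2 + 1.96 * math.sqrt(n / 4))
lo, hi = x[j - 1], x[k - 1]     # ranks are 1-based
sample_median = x[n // 2]       # middle order statistic for odd n
```

For $n=199$ this gives $j=86$ and $k=114$, so the interval is read straight off the sorted sample, with no distributional assumption.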
|
|
47,428
|
Kmeans: Whether to standardise? Can you use categorical variables? Is Cluster 3.0 suitable?
|
First of all: yes: standardization is a must unless you have a strong argument why it is not necessary. Probably try z scores first.
Discrete data is a larger issue. K-means is meant for continuous data. The mean will not be discrete, so the cluster centers will likely be anomalous. There is a high chance that the clustering algorithm ends up discovering the discreteness of your data instead of a sensible structure.
Categorical variables are worse. K-means can't handle them at all; a popular hack is to turn them into multiple binary variables (e.g. male, female). This will however expose the above problems at an even worse scale, because now there are multiple highly correlated binary variables.
Since you apparently are dealing with survey data, consider using hierarchical clustering. With an appropriate distance function, it can deal with all above issues. You just need to spend some effort on finding a good measure of similarity.
Cluster 3.0 - I have never even seen it. I figure it is an okay choice for non data science people. Probably similar to other tools such as Matlab. It will be missing all the modern algorithms, but it probably has an easy to use user interface.
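The standardization point can be sketched numerically (Python; the two features and their values are made up for illustration):

```python
# Why scale before a distance-based method: without scaling, the wide-range
# feature dominates the Euclidean distance almost completely.
import statistics

income = [20_000, 50_000, 80_000]   # range ~ tens of thousands
age = [25, 40, 60]                  # range ~ tens

def zscores(v):
    m, s = statistics.fmean(v), statistics.stdev(v)
    return [(x - m) / s for x in v]

def dist(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

raw = list(zip(income, age))
scaled = list(zip(zscores(income), zscores(age)))

d_raw = dist(raw[0], raw[1])        # ~30000: driven entirely by income
d_scaled = dist(scaled[0], scaled[1])   # both features now contribute
```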
|
|
47,429
|
Kmeans: Whether to standardise? Can you use categorical variables? Is Cluster 3.0 suitable?
|
For number one:
Yes, you have to standardize. The exact method really depends on what you expect to obtain from the data, but in general you need to have all of the features on the same scale. The reason is that otherwise the feature with the largest range will have more weight in the clustering process. For example, if you have a feature with range (0,100) and another with range (0,1), the latter will have almost no effect on the clustering. Since clustering relies on distances, you can see how the feature with the smallest range contributes almost nothing when a distance is calculated.
For number two:
Yes you can use categorical variables by using a binary representation. For example, if you have three colors: blue, brown, green and say some other two continuous variables like age and weight here is how you would represent the data before standardizing:
blue, brown, green, age, weight
0 , 1 , 0 , 25 , 150
1 , 0 , 0 , 26 , 140
0 , 0 , 1 , 26 , 130
After standardizing it should look more or less like this
blue, brown, green, age, weight
0 , 1 , 0 , 0.8 , 1
1 , 0 , 0 , 1 , 0.8
0 , 0 , 1 , 1 , 0.6
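The binary encoding in the first table can be sketched in Python (the `one_hot` helper is hypothetical, just to mirror the layout above):

```python
# One-hot encode the color column and append the continuous variables.
COLORS = ["blue", "brown", "green"]

def one_hot(color):
    return [1 if c == color else 0 for c in COLORS]

rows = [("brown", 25, 150), ("blue", 26, 140), ("green", 26, 130)]
encoded = [one_hot(color) + [age, weight] for color, age, weight in rows]
# encoded[0] -> [0, 1, 0, 25, 150]
```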
|
|
47,430
|
Chi-squared distribution for dice not returning expected values?
|
A chi-square statistic gets bigger the further from expected the entries are. The p-value gets smaller. Very small p-values are saying "If the null hypothesis of equal probabilities were true, something really unlikely just happened" (the usual conclusion is then usually that something less remarkable happened under the alternative that they're not equally probable).
The correct chisquare value for cell counts of 1500 1500 1500 1500 is 0 and the correct p-value is 1:
chisq.test(c(1500,1500,1500,1500))
Chi-squared test for given probabilities
data: c(1500, 1500, 1500, 1500)
X-squared = 0, df = 3, p-value = 1
Your formula for the chi square statistic is wrong. One of the images you posted had counts of 1500 1500 0 1500. In that case, the chi-squared value is 1500, and the p-value is effectively 0:
chisq.test(c(1500,1500,0,1500))
Chi-squared test for given probabilities
data: c(1500, 1500, 0, 1500)
X-squared = 1500, df = 3, p-value < 2.2e-16
So you calculated a chi-square of 1 when you should have got 0 and calculated 0 when you should have got 1500.
What formula are you using?
(An additional check - on these four more typical counts
1 2 3 4
1481 1542 1450 1527
you should get a chi-square of 3.5693 )
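These values can all be checked from the definition of the statistic, $X^2 = \sum (O-E)^2/E$ with $E = \text{total}/4$ for a fair d4 (a Python sketch paralleling the R calls above):

```python
# Chi-square goodness-of-fit statistic for equal expected counts.
def chi_square(observed):
    e = sum(observed) / len(observed)
    return sum((o - e) ** 2 / e for o in observed)

print(round(chi_square([1500, 1500, 1500, 1500]), 4))  # -> 0.0
print(round(chi_square([1500, 1500, 0, 1500]), 4))     # -> 1500.0
print(round(chi_square([1481, 1542, 1450, 1527]), 4))  # -> 3.5693
```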
---
On the use of chi-square tests to test dice for fairness see this question
I gave an answer there which points out that - if you really want to test dice - you might consider other tests.
---
As you noted in your response, the CHITEST function in Excel returns the p-value rather than the test statistic (which seems kind of odd to me, given you can get it with CHIDIST).
In the case where all the expected values are the same, a quick way to get the chi-squared value itself is to use =SUMXMY2(obs.range,exp.range)/exp.range.1,
where obs.range is the range for the observed values and exp.range is the range
of the corresponding expected values, and where exp.range.1 is the first (or indeed any other) value in exp.range, giving something like this:
1 2 3 4
Exp 1500 1500 1500 1500
Obs 1481 1542 1450 1527
chi-sq. p-value
3.5693 0.31188
A mildly clunky but even easier alternative is to use CHIINV(p.value,3), to obtain the chi-square statistic, where p.value is the range of the value returned by CHITEST.
--
Edit: Here's a couple of plots for the two sets of data you posted.
The solid grey lines are the expected numbers of times each face comes up on a fair die.
The inner pair of dashed lines are approximate limits between which the individual face results should lie 95% of the time, if the dice were fair. (So if you rolled a d20, you'd expect the count on one face - on average - to be outside the inner limits.)
The outer pair of dotted lines are approximate Bonferroni limits within which the results on all faces would lie about 95% of the time, if the dice were fair. If any count were to lie outside those bounds, you'd have some ground to suspect the die could be biased.
Edit to add some explanation --
The basic plot is just a plot of the counts. For a die with $k$ sides, the count of the number of times a given face comes up (an individual count) is effectively distributed as a binomial. If the null hypothesis (of a fair die) is true, it's binomial$(N,1/k)$.
The binomial$(N,p)$ has mean $\mu=Np$ and standard deviation $\sigma=\sqrt{Np(1-p)}$.
So the central grey line is marked at $\mu=N/k$. This much you already know how to do.
If $N$ is large and $p = 1/k$ is not too close to 0 or 1, then the standardized $i^\text{th}$ count ($\frac{(X_i-\mu)}{\sigma}$) will be well approximated by a standard normal distribution. About 95% of a normal distribution lies within 2 standard deviations of the mean. So I drew the inner dashed lines at $\mu \pm 2\sigma$. The counts are not independent of each other (since they add to the total number of rolls they must be negatively correlated), but individually they have approximately those limits at the 95% level.
See here, but I used $z=2$ instead of $z=1.96$ - this is of little consequence because I didn't care if it was a 95.4% interval instead of a 95%, and it's only approximate anyway.
So that gives the dashed lines.
Correspondingly, if the toss counts were all independent (which they aren't quite, as I mentioned), the probability that they all lie in some bound is the $k^\text{th}$ power of the probability that one does. To a rough approximation, for independent trials if you want the overall rate (across all faces) of falling outside the bounds to be about $5\%$, the individual probabilities should be about $0.05/k$ (there are several stages of approximation here, not even counting the normal approximation; to work really accurately you need lots of faces and small probabilities of falling outside the limits).
So with $k$ faces and an overall $5\%$ rate of being outside the limits, I want the probability of the individual ones being above each limit to be roughly half that or $0.025/k$. With 4 faces that's a proportion of $0.025/4 = 0.00625$ above the upper limit and $0.00625$ below the lower one.
So I drew the outer limits for the four-sided die (d4) case at $\mu \pm c\sigma$, where $c$ cuts off $0.00625$ of the standard normal distribution in the upper tail. That's about $2.5$ (again, it doesn't pay to be overly accurate about the limit, since we're approximating this thing all over).
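To make the d4 numbers concrete: with $N = 6000$ rolls, $\mu = 1500$ and $\sigma = \sqrt{6000 \times 0.25 \times 0.75} \approx 33.5$. A small sketch (Python; `statistics.NormalDist` stands in for a normal-quantile table) reproduces both sets of limits and the $c \approx 2.5$ cutoff:

```python
import math
from statistics import NormalDist

N, k = 6000, 4                        # rolls and faces for the d4
p = 1 / k
mu = N * p                            # 1500
sigma = math.sqrt(N * p * (1 - p))    # about 33.54

# inner (individual, ~95%) limits at mu +/- 2 sigma
inner = (mu - 2 * sigma, mu + 2 * sigma)

# outer (Bonferroni) limits: upper-tail area 0.025/k = 0.00625 per side
c = NormalDist().inv_cdf(1 - 0.025 / k)
outer = (mu - c * sigma, mu + c * sigma)

print(round(sigma, 2), round(c, 2))   # 33.54 2.5
```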
The d6 works similarly, but the limit for it cuts off an upper tail area of $0.025/6$, which roughly corresponds to $c=2.64$ at the normal distribution.
For the d4, if the null hypothesis were true, the actual proportion of cases with any of the counts falling outside the outer bounds of 2.5 standard deviations from the mean would actually be about 4.2% (I got this by simulation) - easily close enough for my purposes since I don't especially need it to be 5%. The results for the d6 will be closer to 5% (turns out to be around 4.5%). So all those levels of approximation worked well enough (ignoring the existence of negative dependence between counts, the Bonferroni approximation to the individual tail probability, and the normal approximation to the binomial, and then rounding those cutoffs to some convenient value).
These graphs are not intended to replace the chi-squared test (though they could), but more to give a visual assessment, and to help identify the major contributions to the size of the chi-square result.
|
Chi-squared distribution for dice not returning expected values?
|
A chi-square statistic gets bigger the further from expected the entries are. The p-value gets smaller. Very small p-values are saying "If the null hypothesis of equal probabilities were true, somethi
|
Chi-squared distribution for dice not returning expected values?
A chi-square statistic gets bigger the further the entries are from their expected values. The p-value gets smaller. Very small p-values are saying "If the null hypothesis of equal probabilities were true, something really unlikely just happened" (the usual conclusion is then that something less remarkable happened under the alternative that they're not equally probable).
The correct chisquare value for cell counts of 1500 1500 1500 1500 is 0 and the correct p-value is 1:
chisq.test(c(1500,1500,1500,1500))
Chi-squared test for given probabilities
data: c(1500, 1500, 1500, 1500)
X-squared = 0, df = 3, p-value = 1
Your formula for the chi square statistic is wrong. One of the images you posted had counts of 1500 1500 0 1500. In that case, the chi-squared value is 1500, and the p-value is effectively 0:
chisq.test(c(1500,1500,0,1500))
Chi-squared test for given probabilities
data: c(1500, 1500, 0, 1500)
X-squared = 1500, df = 3, p-value < 2.2e-16
So you calculated a chi-square of 1 when you should have got 0 and calculated 0 when you should have got 1500.
What formula are you using?
(An additional check - on these four more typical counts
1 2 3 4
1481 1542 1450 1527
you should get a chi-square of 3.5693 )
---
On the use of chi-square tests to test dice for fairness see this question
I gave an answer there which points out that - if you really want to test dice - you might consider other tests.
---
As you noted in your response, the CHITEST function in Excel returns the p-value rather than the test statistic (which seems kind of odd to me, given you can get it with CHIDIST).
In the case where all the expected values are the same, a quick way to get the chi-squared value itself is to use =SUMXMY2(obs.range,exp.range)/exp.range.1,
where obs.range is the range for the observed values and exp.range is the range
of the corresponding expected values, and where exp.range.1 is the first (or indeed any other) value in exp.range, giving something like this:
1 2 3 4
Exp 1500 1500 1500 1500
Obs 1481 1542 1450 1527
chi-sq. p-value
3.5693 0.31188
A mildly clunky but even easier alternative is to use CHIINV(p.value,3), to obtain the chi-square statistic, where p.value is the range of the value returned by CHITEST.
--
Edit: Here's a couple of plots for the two sets of data you posted.
The solid grey lines are the expected numbers of times each face comes up on a fair die.
The inner pair of dashed lines are approximate limits between which the individual face results should lie 95% of the time, if the dice were fair. (So if you rolled a d20, you'd expect the count on one face - on average - to be outside the inner limits.)
The outer pair of dotted lines are approximate Bonferroni limits within which the results on all faces would lie about 95% of the time, if the dice were fair. If any count were to lie outside those bounds, you'd have some ground to suspect the die could be biased.
Edit to add some explanation --
The basic plot is just a plot of the counts. For a die with $k$ sides, the count of the number of times a given face comes up (an individual count) is effectively distributed as a binomial. If the null hypothesis (of a fair die) is true, it's binomial$(N,1/k)$.
The binomial$(N,p)$ has mean $\mu=Np$ and standard deviation $\sigma=\sqrt{Np(1-p)}$.
So the central grey line is marked at $\mu=N/k$. This much you already know how to do.
If $N$ is large and $p = 1/k$ is not too close to 0 or 1, then the standardized $i^\text{th}$ count ($\frac{(X_i-\mu)}{\sigma}$) will be well approximated by a standard normal distribution. About 95% of a normal distribution lies within 2 standard deviations of the mean. So I drew the inner dashed lines at $\mu \pm 2\sigma$. The counts are not independent of each other (since they add to the total number of rolls they must be negatively correlated), but individually they have approximately those limits at the 95% level.
See here, but I used $z=2$ instead of $z=1.96$ - this is of little consequence because I didn't care if it was a 95.4% interval instead of a 95%, and it's only approximate anyway.
So that gives the dashed lines.
Correspondingly, if the toss counts were all independent (which they aren't quite, as I mentioned), the probability that they all lie in some bound is the $k^\text{th}$ power of the probability that one does. To a rough approximation, for independent trials if you want the overall rate (across all faces) of falling outside the bounds to be about $5\%$, the individual probabilities should be about $0.05/k$ (there are several stages of approximation here, not even counting the normal approximation; to work really accurately you need lots of faces and small probabilities of falling outside the limits).
So with $k$ faces and an overall $5\%$ rate of being outside the limits, I want the probability of the individual ones being above each limit to be roughly half that or $0.025/k$. With 4 faces that's a proportion of $0.025/4 = 0.00625$ above the upper limit and $0.00625$ below the lower one.
So I drew the outer limits for the four-sided die (d4) case at $\mu \pm c\sigma$, where $c$ cuts off $0.00625$ of the standard normal distribution in the upper tail. That's about $2.5$ (again, it doesn't pay to be overly accurate about the limit, since we're approximating this thing all over).
The d6 works similarly, but the limit for it cuts off an upper tail area of $0.025/6$, which roughly corresponds to $c=2.64$ at the normal distribution.
For the d4, if the null hypothesis were true, the actual proportion of cases with any of the counts falling outside the outer bounds of 2.5 standard deviations from the mean would actually be about 4.2% (I got this by simulation) - easily close enough for my purposes since I don't especially need it to be 5%. The results for the d6 will be closer to 5% (turns out to be around 4.5%). So all those levels of approximation worked well enough (ignoring the existence of negative dependence between counts, the Bonferroni approximation to the individual tail probability, and the normal approximation to the binomial, and then rounding those cutoffs to some convenient value).
These graphs are not intended to replace the chi-squared test (though they could), but more to give a visual assessment, and to help identify the major contributions to the size of the chi-square result.
|
Chi-squared distribution for dice not returning expected values?
A chi-square statistic gets bigger the further from expected the entries are. The p-value gets smaller. Very small p-values are saying "If the null hypothesis of equal probabilities were true, somethi
|
47,431
|
Getting rid of a huge categorical factor in multiple regression
|
I would think lme4 would be highly appropriate for this. Treat your huge categorical factor as a practical random effect. I won't go into the theoretical definitions. Alternatively, use sparse.model.matrix() from the Matrix package to build the design matrix and then pass that into glmnet() from the glmnet package. (lme4 naturally builds a sparse design matrix, so you don't need sparse.model.matrix() beforehand.)
If you really want to do the 'average for each level' trick, then be sure to calculate each observation's average excluding itself and include a few extra observations with each factor level at the population mean. Then use this derived variable as a feature in your models instead of the categorical variable. If the factor was the only feature, then this result would be identical to lme4 or glmnet (assuming you solved for how many average observations to add).
There are a few blog entries out there that call the 'average for each level' trick impact coding. Also from my experience, if there is a strong dense feature, you might want to fit a simple model on that feature and impact code the residuals by level of the huge categorical factor instead of the pure response.
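A leave-one-out version of that impact coding keeps each observation's own response out of its feature. A minimal sketch (Python; the function name and the global-mean fallback for singleton levels are my own choices, not from any particular package):

```python
from collections import defaultdict

def loo_impact_code(levels, y):
    """Leave-one-out mean of y per factor level; singletons fall back to the global mean."""
    sums, counts = defaultdict(float), defaultdict(int)
    for lv, yi in zip(levels, y):
        sums[lv] += yi
        counts[lv] += 1
    grand = sum(y) / len(y)
    return [
        (sums[lv] - yi) / (counts[lv] - 1) if counts[lv] > 1 else grand
        for lv, yi in zip(levels, y)
    ]

print(loo_impact_code(["a", "a", "b"], [1.0, 3.0, 5.0]))  # [3.0, 1.0, 3.0]
```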
As mentioned above, this is more practical advice. Other people will probably come along with some stronger theoretical advice.
|
Getting rid of a huge categorical factor in multiple regression
|
I would think lme4 would be highly appropriate for this. Treat your huge categorical factor as a practical random effect. I won't go into the theoretical definitions. Alternatively, use sparse.mode
|
Getting rid of a huge categorical factor in multiple regression
I would think lme4 would be highly appropriate for this. Treat your huge categorical factor as a practical random effect. I won't go into the theoretical definitions. Alternatively, use sparse.model.matrix() from the Matrix package to build the design matrix and then pass that into glmnet() from the glmnet package. (lme4 naturally builds a sparse design matrix, so you don't need sparse.model.matrix() beforehand.)
If you really want to do the 'average for each level' trick, then be sure to calculate each observation's average excluding itself and include a few extra observations with each factor level at the population mean. Then use this derived variable as a feature in your models instead of the categorical variable. If the factor was the only feature, then this result would be identical to lme4 or glmnet (assuming you solved for how many average observations to add).
There are a few blog entries out there that call the 'average for each level' trick impact coding. Also from my experience, if there is a strong dense feature, you might want to fit a simple model on that feature and impact code the residuals by level of the huge categorical factor instead of the pure response.
As mentioned above, this is more practical advice. Other people will probably come along with some stronger theoretical advice.
|
Getting rid of a huge categorical factor in multiple regression
I would think lme4 would be highly appropriate for this. Treat your huge categorical factor as a practical random effect. I won't go into the theoretical definitions. Alternatively, use sparse.mode
|
47,432
|
Theoretical objections to hypothesis testing [duplicate]
|
Well, try virtually any book about Bayesian statistics. I haven't found one that doesn't have at least a few paragraphs debunking the practice of significance testing. Really. And Gelman's books are a good example of that. I used to remember relevant titles (I still remember my first one: E.T. Jaynes' "Probability Theory: The Logic of Science". Jaynes definitely wrote with passion, and the controversy over significance testing is something that really made me do statistics professionally ;-) ), but soon I got overwhelmed by the abundance of literature on this topic.
Try also the phrase +bayesian significance testing fisher in Google Scholar and I guarantee you will find at least a dozen good references (and some interesting info too).
|
Theoretical objections to hypothesis testing [duplicate]
|
Well, try virtually any book about Bayesian statistics. I haven't found one that doesn't have at least a few paragraphs debunking the practice of significance testing. Really. And Gelman's books are a good e
|
Theoretical objections to hypothesis testing [duplicate]
Well, try virtually any book about Bayesian statistics. I haven't found one that doesn't have at least a few paragraphs debunking the practice of significance testing. Really. And Gelman's books are a good example of that. I used to remember relevant titles (I still remember my first one: E.T. Jaynes' "Probability Theory: The Logic of Science". Jaynes definitely wrote with passion, and the controversy over significance testing is something that really made me do statistics professionally ;-) ), but soon I got overwhelmed by the abundance of literature on this topic.
Try also the phrase +bayesian significance testing fisher in Google Scholar and I guarantee you will find at least a dozen good references (and some interesting info too).
|
Theoretical objections to hypothesis testing [duplicate]
Well, try virtually any book about Bayesian statistics. I haven't found one that doesn't have at least a few paragraphs debunking the practice of significance testing. Really. And Gelman's books are a good e
|
47,433
|
Theoretical objections to hypothesis testing [duplicate]
|
This paper characterizes the publication/scientific practice in psychology as being a ritual of finding p<.05. The authors argue against hypothesis testing done without a solid theoretical basis. Note, though, that the argument is more philosophical than anything else. But maybe it helps.
|
Theoretical objections to hypothesis testing [duplicate]
|
This paper characterizes the publication/scientific practice in psychology as being a ritual of finding p<.05. They argue against hypothesis testing without solid theoretical basis. Note, though, that
|
Theoretical objections to hypothesis testing [duplicate]
This paper characterizes the publication/scientific practice in psychology as being a ritual of finding p<.05. The authors argue against hypothesis testing done without a solid theoretical basis. Note, though, that the argument is more philosophical than anything else. But maybe it helps.
|
Theoretical objections to hypothesis testing [duplicate]
This paper characterizes the publication/scientific practice in psychology as being a ritual of finding p<.05. They argue against hypothesis testing without solid theoretical basis. Note, though, that
|
47,434
|
Theoretical objections to hypothesis testing [duplicate]
|
I'm reading an article right now called "Principles of Inference and Their Consequences" by D.G. Mayo and M. Kruse. It's a good article so far about how hypothesis testing can violate principles like the likelihood principle (LP). They go through a concrete example of coin-tossing and show how the concept of statistical significance violates the LP.
|
Theoretical objections to hypothesis testing [duplicate]
|
I'm reading an article right now called "Principles of Inference and Their Consequences" by D.G. Mayo and M. Kruse. It's a good article so far about how hypothesis testing can violate principles like
|
Theoretical objections to hypothesis testing [duplicate]
I'm reading an article right now called "Principles of Inference and Their Consequences" by D.G. Mayo and M. Kruse. It's a good article so far about how hypothesis testing can violate principles like the likelihood principle (LP). They go through a concrete example of coin-tossing and show how the concept of statistical significance violates the LP.
|
Theoretical objections to hypothesis testing [duplicate]
I'm reading an article right now called "Principles of Inference and Their Consequences" by D.G. Mayo and M. Kruse. It's a good article so far about how hypothesis testing can violate principles like
|
47,435
|
Simulate ARIMA by hand
|
The warning is because the numerical optimization for the MLE has reached the default maximum number of iterations before convergence. You can increase that by adding optim.control = list(maxit = ?) at the end of your ARIMA fit, e.g. arima(x.ts, order = c(p, d, q), optim.control = list(maxit = 1000)). (The argument must be named; an unnamed list in that position would be matched to the seasonal argument instead.) I think the default for maxit is 500 but you can increase it.
|
Simulate ARIMA by hand
|
The warning is because the normal optimization for the MLE has reached the default maximum number of iterations before convergence. You can increase that by adding optim.control = list(maxit = ?) at t
|
Simulate ARIMA by hand
The warning is because the numerical optimization for the MLE has reached the default maximum number of iterations before convergence. You can increase that by adding optim.control = list(maxit = ?) at the end of your ARIMA fit, e.g. arima(x.ts, order = c(p, d, q), optim.control = list(maxit = 1000)). (The argument must be named; an unnamed list in that position would be matched to the seasonal argument instead.) I think the default for maxit is 500 but you can increase it.
|
Simulate ARIMA by hand
The warning is because the normal optimization for the MLE has reached the default maximum number of iterations before convergence. You can increase that by adding optim.control = list(maxit = ?) at t
|
47,436
|
Must I normalize inputs into a perceptron that uses a sigmoid activation function?
|
The inputs should be scaled to the so-called "active range" of the activation function, or, in other words, the area of the function curve where the derivative of the function is clearly non-zero. This is done for backpropagation to work properly, since it uses activation function derivatives, and ~ 0 derivatives imply extremely small (insignificant) changes to NN weights (no learning). For sigmoid, the active range lies somewhere between -sqrt(3) and sqrt(3). You may scale inputs to that range.
Also, yes: the sigmoid will always output values in (0, 1), because that is its range. You will need to scale the NN outputs (or the training targets) to the necessary ranges.
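A minimal min-max rescaling into that interval could look like the following (plain Python; the target interval $[-\sqrt{3}, \sqrt{3}]$ just follows the rule of thumb above, and the mid-point fallback for constant inputs is my own choice):

```python
import math

def scale_to_active_range(xs, lo=-math.sqrt(3), hi=math.sqrt(3)):
    """Linearly map the observed range of xs onto [lo, hi]."""
    xmin, xmax = min(xs), max(xs)
    span = xmax - xmin
    if span == 0:                        # constant input: map everything to the mid-point
        return [0.5 * (lo + hi)] * len(xs)
    return [lo + (x - xmin) * (hi - lo) / span for x in xs]

scaled = scale_to_active_range([0.0, 5.0, 10.0])
print([round(v, 3) for v in scaled])     # [-1.732, 0.0, 1.732]
```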
|
Must I normalize inputs into a perceptron that uses a sigmoid activation function?
|
The inputs should be scaled to the so-called "active range" of the activation function, or, in other words, the area of the function curve where the derivative of the function is clearly non-zero. Thi
|
Must I normalize inputs into a perceptron that uses a sigmoid activation function?
The inputs should be scaled to the so-called "active range" of the activation function, or, in other words, the area of the function curve where the derivative of the function is clearly non-zero. This is done for backpropagation to work properly, since it uses activation function derivatives, and ~ 0 derivatives imply extremely small (insignificant) changes to NN weights (no learning). For sigmoid, the active range lies somewhere between -sqrt(3) and sqrt(3). You may scale inputs to that range.
Also, yes: the sigmoid will always output values in (0, 1), because that is its range. You will need to scale the NN outputs (or the training targets) to the necessary ranges.
|
Must I normalize inputs into a perceptron that uses a sigmoid activation function?
The inputs should be scaled to the so-called "active range" of the activation function, or, in other words, the area of the function curve where the derivative of the function is clearly non-zero. Thi
|
47,437
|
Inequality involving interquartile range and standard deviation
|
The IQR and standard deviation both are proportional to a scale factor, so the proper way to compare the two is with their ratio.
Upper bound for SD:IQR
The Cauchy distribution with PDF
$$\frac{dx / \sigma}{\pi(1 + (x/\sigma)^2)}$$
has infinite SD and quartiles at $\pm\sigma$. From it we can create, via truncation on the left and right, a distribution with arbitrarily large SD while (by adjusting $\sigma$) we can separately make the IQR arbitrarily short. Therefore, for any given IQR there is no upper bound on the SD and for any given SD there is no lower bound on the IQR.
Lower bound for SD:IQR
For any given IQR, we can reduce the SD in two ways: (1) by shifting the middle 50% of the values towards the mid-point of the quartiles and (2) by shifting the outer 50% of the values towards the quartiles. The lower limit of the SD for a fixed IQR is approached by the family of (discrete) distributions having $25 + \varepsilon$% probability at $-1$ and $1$ and $50 - 2\varepsilon$% probability at $0$ ($0 \lt \varepsilon \lt 25$); members of this family have quartiles at $\pm 1$--whence an IQR of $2$--and variances of $(50 + 2\varepsilon)/100$, so SDs of $\sqrt{(50 + 2\varepsilon)/100}$; the (lower) limiting ratio of SD to IQR therefore is $\sqrt{1/2}\big/2 = \sqrt{2}/4 \approx 0.354$.
(Notice that no member of this family violates Chebyshev's Inequality: for every $\varepsilon \gt 0$, $100$% of the probability lies strictly within $\sqrt{2}$ SDs of the mean ($0$), and there is no ambiguity concerning the positions of the quartiles. In the limit as $\varepsilon \to 0$, the ratio of SD to IQR approaches $\sqrt{2}/4$ and $50$% of the probability sits exactly $\sqrt{2}$ SDs from the mean--exactly the maximum that Chebyshev's Inequality permits at that distance, so the bound is attained but not violated. The positions of the quartiles for the limiting distribution with $\varepsilon=0$ are, however, ambiguous: the lower one could be anywhere between $-1$ and $0$ and the upper anywhere between $0$ and $1$.)
Summary
Because the empirical distribution of a sufficiently large finite sample can approach any given distribution arbitrarily closely, the conclusion--both for theoretical distributions and empirical distributions of data--is that
$$\frac{\sqrt{2}}{4} \le \frac{SD}{IQR} \le \infty$$
and these are the best bounds possible.
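For a sense of where familiar distributions fall within these bounds: a normal distribution has quartiles at $\mu \pm 0.6745\,\sigma$, so its SD:IQR ratio is $1/(2 \times 0.6745) \approx 0.741$, comfortably inside both extremes. A quick check (Python, purely illustrative):

```python
from statistics import NormalDist

z75 = NormalDist().inv_cdf(0.75)        # upper quartile of the standard normal, ~0.6745
ratio = 1 / (2 * z75)                   # SD : IQR for any normal distribution
print(round(z75, 4), round(ratio, 4))   # 0.6745 0.7413
```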
|
Inequality involving interquartile range and standard deviation
|
The IQR and standard deviation both are proportional to a scale factor, so the proper way to compare the two is with their ratio.
Upper bound for SD:IQR
The Cauchy distribution with PDF
$$\frac{dx / \
|
Inequality involving interquartile range and standard deviation
The IQR and standard deviation both are proportional to a scale factor, so the proper way to compare the two is with their ratio.
Upper bound for SD:IQR
The Cauchy distribution with PDF
$$\frac{dx / \sigma}{\pi(1 + (x/\sigma)^2)}$$
has infinite SD and quartiles at $\pm\sigma$. From it we can create, via truncation on the left and right, a distribution with arbitrarily large SD while (by adjusting $\sigma$) we can separately make the IQR arbitrarily short. Therefore, for any given IQR there is no upper bound on the SD and for any given SD there is no lower bound on the IQR.
Lower bound for SD:IQR
For any given IQR, we can reduce the SD in two ways: (1) by shifting the middle 50% of the values towards the mid-point of the quartiles and (2) by shifting the outer 50% of the values towards the quartiles. The lower limit of the SD for a fixed IQR is approached by the family of (discrete) distributions having $25 + \varepsilon$% probability at $-1$ and $1$ and $50 - 2\varepsilon$% probability at $0$ ($0 \lt \varepsilon \lt 25$); members of this family have quartiles at $\pm 1$--whence an IQR of $2$--and variances of $(50 + 2\varepsilon)/100$, so SDs of $\sqrt{(50 + 2\varepsilon)/100}$; the (lower) limiting ratio of SD to IQR therefore is $\sqrt{1/2}\big/2 = \sqrt{2}/4 \approx 0.354$.
(Notice that no member of this family violates Chebyshev's Inequality: for every $\varepsilon \gt 0$, $100$% of the probability lies strictly within $\sqrt{2}$ SDs of the mean ($0$), and there is no ambiguity concerning the positions of the quartiles. In the limit as $\varepsilon \to 0$, the ratio of SD to IQR approaches $\sqrt{2}/4$ and $50$% of the probability sits exactly $\sqrt{2}$ SDs from the mean--exactly the maximum that Chebyshev's Inequality permits at that distance, so the bound is attained but not violated. The positions of the quartiles for the limiting distribution with $\varepsilon=0$ are, however, ambiguous: the lower one could be anywhere between $-1$ and $0$ and the upper anywhere between $0$ and $1$.)
Summary
Because the empirical distribution of a sufficiently large finite sample can approach any given distribution arbitrarily closely, the conclusion--both for theoretical distributions and empirical distributions of data--is that
$$\frac{\sqrt{2}}{4} \le \frac{SD}{IQR} \le \infty$$
and these are the best bounds possible.
|
Inequality involving interquartile range and standard deviation
The IQR and standard deviation both are proportional to a scale factor, so the proper way to compare the two is with their ratio.
Upper bound for SD:IQR
The Cauchy distribution with PDF
$$\frac{dx / \
|
47,438
|
Distribution of ratio of sample means from two independent normal variables?
|
This framework is a particular case of Cox's model
http://www.jstor.org/stable/2530661
studied here
http://onlinelibrary.wiley.com/doi/10.1002/bimj.200310009/abstract
|
Distribution of ratio of sample means from two independent normal variables?
|
This framework is a particular case of Cox's model
http://www.jstor.org/stable/2530661
studied here
http://onlinelibrary.wiley.com/doi/10.1002/bimj.200310009/abstract
|
Distribution of ratio of sample means from two independent normal variables?
This framework is a particular case of Cox's model
http://www.jstor.org/stable/2530661
studied here
http://onlinelibrary.wiley.com/doi/10.1002/bimj.200310009/abstract
|
Distribution of ratio of sample means from two independent normal variables?
This framework is a particular case of Cox's model
http://www.jstor.org/stable/2530661
studied here
http://onlinelibrary.wiley.com/doi/10.1002/bimj.200310009/abstract
|
47,439
|
Distribution of ratio of sample means from two independent normal variables?
|
If you could only divide Y by c, all of your data would come from $N(\mu, \sigma^2)$. This suggests to me an iterative approach. Estimate c, then use the pooled data to estimate $\mu$ and $\sigma^2$; then use these improved estimates to get a better estimate of c, and repeat until it converges. This ducks the question of the theoretical best estimator but might still be a useful approach.
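That alternating scheme can be sketched compactly before looking at the full R version below (Python here; the function name ratio_est and the fixed five iterations are illustrative choices, not a claim about the optimal scheme):

```python
import statistics as st

def ratio_est(x, y, iters=5):
    """Alternate between estimating c and re-estimating mu, sigma^2 from the pooled data."""
    mu, s2 = st.fmean(x), st.variance(x)
    for _ in range(iters):
        # average a mean-based and an SD-based estimate of c
        c = 0.5 * (st.fmean(y) / mu + (st.variance(y) / s2) ** 0.5)
        pooled = list(x) + [yi / c for yi in y]
        mu, s2 = st.fmean(pooled), st.variance(pooled)
    return mu, c

# toy data where y is roughly twice x
mu_hat, c_hat = ratio_est([25.0, 30.0, 35.0], [50.0, 60.0, 70.0])
print(round(mu_hat, 2), round(c_hat, 2))
```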
You could use simulation based on your model (if you are confident in it) to work out the approximate distribution of any estimator, or mix of estimators, you choose.
Then I'd use a bootstrap to estimate the variances of your estimates of $\mu$ and $c$. This has the advantage of not being as dependent on the distributional assumptions of your model.
It's easier for me to illustrate this general approach than to try to explain:
###
# Create a function that does the iterative thing
RatioEst <- function(x,y, verbose=FALSE){
mu_latest <- mean(x)
sigma2_latest <- var(x)
for (i in 1:5){
c_latest <- mean(c(
mean(y / mu_latest),
sqrt(var(y)/sigma2_latest)))
mu_latest <- mean(c(x, y/c_latest))
sigma2_latest <- var(c(x, y/c_latest))
if(verbose){print(c(mu_latest, c_latest, sigma2_latest))}
}
return(c(mu_latest, c_latest))
}
#### Simulation to get an idea of the distribution of estimates.
# Simulate data many times and see the results of our estimation technique.
# True values of mu and c are 30 and 2
reps <- 10000
results <- matrix(0, nrow=reps, ncol=2)
for (i in 1:reps){
x <- rnorm(20,30,5)
y <- rnorm(30,60,10)
results[i,] <- RatioEst(x,y, verbose=FALSE)
}
summary(results)
par(mfrow=c(1,2))
plot(density(results[,1]), bty="l", main="Simulated estimates of mu",
xlab="True value=30")
plot(density(results[,2]), bty="l", main="Simulated estimates of c",
xlab="True value=2")
This gives the results below which suggest that the estimators I've chosen are biased (for mu upwards; for c downwards) although the median of repeated estimates is very good.
mu c
Min. :24.43 Min. :0.5937
1st Qu.:28.85 1st Qu.:1.8256
Median :30.01 Median :2.0072
Mean :31.21 Mean :1.9340
3rd Qu.:31.87 3rd Qu.:2.1284
Max. :73.57 Max. :2.6688
So that was a simulation to show the properties of the estimators I'd chosen (which you'll see included a funny sort of estimate of c that is an average of two estimates). Now below is how you'd go about the actual estimation, if you used this approach:
#### Actual estimation
set.seed(123)
x <- rnorm(20,30,5)
y <- rnorm(30,60,10)
# point estimates
RatioEst(x, y, verbose=TRUE)
which gives these results (including showing how the iteration works):
[1] 31.12087 1.89926 22.66501
[1] 31.050508 1.906381 22.529121
[1] 31.001155 1.911407 22.438041
[1] 30.967360 1.914864 22.377693
[1] 30.944615 1.917198 22.337999
[1] 30.944615 1.917198
To get a confidence interval here is the bootstrap:
# bootstrap
# Simulate data *once* and then resample from it many times.
# Has the advantage that will work even if original specification
# of distribution is incorrect
reps <- 699
boot.results <- matrix(0, nrow=reps, ncol=2)
for (i in 1:reps){
boot.results[i,] <- RatioEst(
x=sample(x, replace=TRUE),
y=sample(y, replace=TRUE))
}
summary(boot.results)
apply(boot.results, 2, quantile, probs=c(0.025, 0.975))
which gives these results for a (non symmetrical) 95% confidence interval:
mu c
2.5% 28.02008 1.109987
97.5% 44.38868 2.236229
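The percentile bootstrap used here is generic: resample with replacement, recompute the estimate, and take quantiles of the results. A minimal sketch (Python, illustrative only; boot_ci is a made-up helper name):

```python
import random
import statistics as st

def boot_ci(data, stat=st.fmean, reps=2000, alpha=0.05, seed=123):
    """Percentile-bootstrap CI: resample with replacement and sort the recomputed statistic."""
    rng = random.Random(seed)
    vals = sorted(stat([rng.choice(data) for _ in data]) for _ in range(reps))
    return vals[int(reps * alpha / 2)], vals[int(reps * (1 - alpha / 2)) - 1]

# CI for the mean of 1..20 (sample mean is 10.5)
lo, hi = boot_ci([float(i) for i in range(1, 21)])
print(lo, hi)
```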
|
47,440
|
quantile in scipy library
|
You are calling the normal pdf, with parameters $\mu=2$ and $\sigma=9$, evaluated at the points 0,1,2,3,4. It cannot be interpreted as a probability as is. Are you interested in probabilities or quantiles?
If you want quantiles, try
scipy.stats.norm.ppf( [.05,.5, .95], 2, 9)
This will give you the quantiles at the probabilities 0.05, 0.5, and 0.95. For example, the solution to $P( N_{2,9} < q ) = 0.05$ is scipy.stats.norm.ppf(.05, 2, 9).
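If scipy is not at hand, the same quantiles can be cross-checked with the standard library's statistics.NormalDist (Python 3.8+), whose inv_cdf is the quantile function that scipy calls ppf:

```python
from statistics import NormalDist

d = NormalDist(mu=2, sigma=9)

# Quantiles at probabilities 0.05, 0.5, 0.95 (same as norm.ppf([...], 2, 9))
q = [d.inv_cdf(p) for p in (0.05, 0.5, 0.95)]

# cdf inverts inv_cdf: P(N(2, 9^2) < q) recovers the probabilities
roundtrip = [d.cdf(x) for x in q]
```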
|
47,441
|
Percent correctly predicted of logit model
|
@FrankHarrell is correct that percent accuracy isn't the loss function that logistic regression is trying to optimize. So there could be situations where the best model according to the (quasi) binomial likelihood isn't also the best one according to percent accuracy.
Edited to add: He's also right in the comments below that setting a cutpoint has serious problems. What I've proposed below is a workaround that gets at the intuition of percent accuracy but avoids setting an arbitrary threshold between the underlying continuous model predictions.
On the other hand, percent accuracy seems like a perfectly reasonable loss function as well, and it might be worth knowing how logistic regression performs with it. Percent accuracy can be more intuitive, and it isn't as susceptible to outliers and the occasional prediction that was off by a very large amount.
Finding this value is pretty straightforward. First, find the probabilities the model assigns to each outcome:
probabilities <- predict(model.results, type = "response")
If your glm flipped a bunch of biased coins for each response, it would give heads this percent of the time. Then all you need to do is find the proportion of the time that the coin will come up the wrong way. The simplest way is probably:
1 - mean(abs(probabilities - binary.outcome))
You can prove to yourself that this gives the right answer by simulating the biased coinflips yourself with rbinom.
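A quick sanity check of that formula in pure Python (the lists here are simulated stand-ins for the model's fitted probabilities and the observed 0/1 outcomes):

```python
import random

random.seed(0)
n = 10_000

# Hypothetical fitted probabilities, and outcomes actually drawn from them
probs = [random.random() for _ in range(n)]
outcomes = [1 if random.random() < p else 0 for p in probs]

# The quantity from the answer: 1 - mean(abs(probabilities - binary.outcome))
expected_acc = 1 - sum(abs(p, ) [0] if False else abs(p - y) for p, y in zip(probs, outcomes)) / n

# Verify by actually flipping the biased coins (the rbinom analogue)
flips = [1 if random.random() < p else 0 for p in probs]
simulated_acc = sum(f == y for f, y in zip(flips, outcomes)) / n
```

Both numbers land near 2/3 for uniformly distributed probabilities, and they agree up to Monte Carlo noise.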
|
47,442
|
t-distribution confidence intervals for non-Gaussian data but large n
|
Ok, after the hint of Procrastinator I think this is the answer (please correct me if I missed something).
First of all, $\frac{\overline{X}_n-\mu}{S_n/\sqrt{n}}$ is t-distributed if $\overline{X}_n$ has a Normal distribution, $S_n$ has a $\chi$-distribution with $n-1$ degrees of freedom, and $\overline{X}_n$ and $S_n$ are independent. In that sense, normality of the single $X_i$ that lead to the sample mean $\overline{X}_n$ is not required.
For non-normal data, the distribution of the sample mean converges in distribution to a Normal distribution by the central limit theorem. By the law of large numbers the sample variance $S^2_n$ converges almost surely to the distribution variance $\sigma^2$. Since almost sure convergence is preserved under continuous mappings, the sample standard deviation also converges almost surely to the distribution standard deviation $S_n \rightarrow_{a.s.} \sigma$.
Since almost sure convergence implies convergence in probability, we can now apply Slutsky's theorem which states (for this case) that if $X_n \rightarrow_D X$ and $Y_n \rightarrow_P c$, then $X_n/Y_n\rightarrow_D X/c$. For our case, this would mean that
$$\frac{\overline{X}_n-\mu}{S_n/\sqrt{n}} \rightarrow_D \frac{\overline{X}_n-\mu}{\sigma/\sqrt{n}}.$$
So this means that even if the standard deviation is not $\chi$-distributed the t-statistic will still converge to a Normal distribution by the central limit theorem and Slutsky's theorem. So I guess what my statistics book meant is kind of what I expected:
for large $n$ the sample mean converges in distribution to a Normal distribution
for large degrees of freedom the t-distribution converges to a Normal distribution
even if the sample standard deviation is not $\chi$-distributed and not independent of the sample mean, it does not ''disturb'' the convergence of the sample mean distribution to the Normal distribution
Therefore, we use the t-distribution for computing confidence intervals for large $n$ even though it is basically using the Normal distribution. I guess the t-distribution is just used because it is a bit more conservative.
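A small simulation illustrates this convergence for clearly non-normal data: with exponential observations, the t-statistic's nominal 95% interval still covers the true mean roughly 95% of the time for moderate $n$ (a rough sketch, not a proof):

```python
import random
import statistics

random.seed(0)
n, reps = 200, 2000
true_mean = 1.0  # mean of Exp(rate=1), a heavily skewed distribution

covered = 0
for _ in range(reps):
    sample = [random.expovariate(1.0) for _ in range(n)]
    xbar = statistics.mean(sample)
    se = statistics.stdev(sample) / n ** 0.5
    # For n = 200 the t and normal critical values are nearly identical,
    # so 1.96 stands in for both.
    if abs(xbar - true_mean) <= 1.96 * se:
        covered += 1

coverage = covered / reps  # should be close to 0.95
```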
|
47,443
|
How to explain the connection between SVD and clustering?
|
Perhaps this will help, taken from the Wikipedia article on PCA (PCA is very similar to SVD):
"Relation between PCA and K-means clustering
It has been shown recently (2001,2004) that the relaxed solution of K-means clustering, specified by the cluster indicators, is given by the PCA principal components, and the PCA subspace spanned by the principal directions is identical to the cluster centroid subspace specified by the between-class scatter matrix. Thus PCA automatically projects to the subspace where the global solution of K-means clustering lies, and thus facilitates K-means clustering to find near-optimal solutions."
The way I see it, dimensionality reduction is distinct from clustering but it can help clustering high-dimensional data by reducing to a "useful" low dimensional representation.
|
47,444
|
How to explain the connection between SVD and clustering?
|
I had a similar question in mind when trying to compare methods like SVD, PCA, LSA (Latent Semantic Analysis), and NMF (Non-negative Matrix Factorization).
To concur Bitwise's point, the NMF wikipedia page http://en.wikipedia.org/wiki/Non-negative_matrix_factorization states that "It has been shown [27][28] NMF is equivalent to a relaxed form of K-means clustering: matrix factor W contains cluster centroids and H contains cluster membership indicators, when using the least square as NMF objective. This provides theoretical foundation for using NMF for data clustering."
The citations are:
[27] C. Ding, X. He, H.D. Simon (2005). "On the Equivalence of Nonnegative Matrix Factorization and Spectral Clustering". Proc. SIAM Int'l Conf. Data Mining, pp. 606-610. May 2005
[28] Ron Zass and Amnon Shashua (2005). "A Unifying Approach to Hard and Probabilistic Clustering". International Conference on Computer Vision (ICCV) Beijing, China, Oct., 2005.
|
47,445
|
Why can a polynomial of degree $>2$ not be a cumulant generating function?
|
In the meantime, I found out that the result (rephrased in terms of characteristic functions) was first described in the paper
J. Marcinkiewicz, Sur une propriete de la loi de Gauss,
Mathematische Zeitschrift 44 (1939) 612-618.
The result is also proved on p.213 of
E. Lukacs,
Characteristic Functions, 2nd ed.,
Griffin, London 1970.
The proof is quite lengthy.
The remarks on p. 224 there imply that the nonexistence of polynomial cumulant generating functions of degree $>2$ is a consequence of the fact that every entire cumulant generating function $f(x)$ must satisfy a ridge property of the form $\Re f(x+it) \le f(x)$ for all real $x$ and $t$. (This is equivalent to the traditional ridge property $|c(t+iy)|\le c(iy)$ for the characteristic function $c(x)$, and can be obtained from the latter by taking the logarithm.)
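For concreteness, the moment generating function version of this ridge inequality is elementary (a sketch; write $M(x)=\mathbb{E}\,e^{xX}$ and $f=\log M$ wherever these exist):
$$|M(x+it)| = \left|\mathbb{E}\,e^{(x+it)X}\right| \le \mathbb{E}\left|e^{(x+it)X}\right| = \mathbb{E}\,e^{xX} = M(x),$$
and taking logarithms gives $\Re f(x+it) \le f(x)$. A quadratic $f(x) = \mu x + \tfrac{1}{2}\sigma^2 x^2$ satisfies this comfortably, since $\Re f(x+it) = \mu x + \tfrac{1}{2}\sigma^2 (x^2 - t^2) \le f(x)$; per the remarks cited above, it is this inequality that a polynomial of degree $>2$ eventually violates as $|t| \to \infty$.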
|
47,446
|
Why can a polynomial of degree $>2$ not be a cumulant generating function?
|
For future reference, this set of lecture notes provides a rather succinct proof of the Marcinkiewicz theorem:
https://math.uc.edu/~brycw/probab/charakt/charakt.pdf
(and excels by being neither paywalled nor in French)
It seems the key is to reframe the problem as showing that the characteristic function of the difference between two variables with the polynomial characteristic must be Gaussian, which can only be true if the original characteristic function was that of a Gaussian itself. They then only have to work with even-ordered polynomials, as the difference variable induces cancellation. They then bound the characteristic function from below by showing the highest-order term must have a negative coefficient (for the whole function to be bounded), and from above by using Jensen's inequality. This then leads to a contradiction unless the polynomial is a quadratic.
|
47,447
|
How to compute the standard error of the mean of an AR(1) process?
|
Well, there are three things as I see it with this question:
1) In your derivation, when you take the variance, the terms inside (the $\rho$'s) should get squared, and you should end up with the expression below. I didn't consider the autocovariance earlier, sorry about that:
$$
Var(\overline{x}) = \frac{\sigma_{\varepsilon}^2}{N} \frac{1}{1 - \rho^2} + \sum\limits_{t=0}^{N-1}\sum\limits_{j\neq t}^{N-1}\frac{\sigma_{\varepsilon}^2}{N^2} \frac{1}{1 - \rho^2}\rho^{|j-t|}$$
2) In your code you have calculated the variance of xbar; for the standard error the code should take the sqrt of that answer.
3) You have assumed that the white noise has been generated from a (0,1) distribution, when in fact the white noise only has to have constant variance. I don't know what value of the constant variance R uses to generate the time series; perhaps you could check on that.
Hope this helps you :)
|
47,448
|
How to compute the standard error of the mean of an AR(1) process?
|
Well, actually, when you take the following
\begin{align*}
Var(\overline{x}) &= Var\left(\frac{1}{N} \sum\limits_{t=0}^{N-1} x_t\right) \\
\end{align*}
it is easier to derive an implicit value rather than an explicit value in this case. Your answer and mine are the same; it's just that yours is a bit more difficult to handle because of the expansion of the $\rho$'s, but some algebraic manipulation should be able to do the trick, I guess. I derived the answer as follows.
Since $\overline{x}$ is a linear combination,
\begin{align*}
Var(\overline{x}) &= \frac{1}{N^2}\sum\limits_{t=0}^{N-1}\sum\limits_{j=0}^{N-1} Cov\left( x_t, x_j\right) \\
\end{align*}
\begin{align*}
Var(\overline{x}) &= \frac{1}{N^2}\sum\limits_{t=0}^{N-1}Var\left( x_t \right) + \frac{1}{N^2}\sum\limits_{t=0}^{N-1}\sum\limits_{j \ne t}^{N-1}Cov\left( x_t, x_j\right) \\
\end{align*}
Now for an AR(1) process $Var(x_t) = \frac{{\sigma_{\varepsilon}}^2}{1 - \rho^2} $ and $Cov(x_t, x_j) = \frac{{\sigma_{\varepsilon}}^2}{1 - \rho^2}\rho^{|j-t|} $.
Substituting in the above gives the required equation. Hope this answers your question :)
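As a numerical sanity check of this result (a sketch in Python rather than R; the parameters $\rho = 0.9$, $\sigma_\varepsilon = 1$, $N = 100$ are chosen to match the question's simulation), the implied standard error of $\overline{x}$ can be compared with a Monte Carlo estimate:

```python
import random
import statistics

rho, sigma_e, N = 0.9, 1.0, 100

# Theoretical Var(xbar) from the double sum above
gamma0 = sigma_e ** 2 / (1 - rho ** 2)  # Var(x_t)
var_xbar = sum(gamma0 * rho ** abs(j - t)
               for t in range(N) for j in range(N)) / N ** 2

# Monte Carlo: simulate stationary AR(1) paths, look at the sd of the mean
random.seed(42)
means = []
for _ in range(4000):
    x = random.gauss(0, gamma0 ** 0.5)  # draw x_0 from the stationary law
    path = []
    for _ in range(N):
        x = rho * x + random.gauss(0, sigma_e)
        path.append(x)
    means.append(statistics.mean(path))
mc_se = statistics.stdev(means)
```

The two agree up to simulation noise (both are near 0.95 here).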
|
47,449
|
How to compute the standard error of the mean of an AR(1) process?
|
This is the R code, btw:
nrMCS <- 10000
N <- 100
pers <- 0.9
means <- numeric(nrMCS)
for (i in 1:nrMCS) {
means[i] <- mean(arima.sim(model=list(ar=c(pers)), n = N,mean=0,sd=1))
}
#Simulation answer
ans1 <-sd(means)
#This should be the standard error according to the given formula
cov <- 0
for(i in 1:N){
for(j in 1:N){
cov <- cov +(1/((N^2)*(1-pers^2)))*pers^abs(j-i)
}
}
ans2 <- sqrt(cov)
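Incidentally, the double loop above has a closed form, $Var(\overline{x}) = \frac{\gamma_0}{N^2}\left(N + 2\sum_{k=1}^{N-1}(N-k)\rho^k\right)$ with $\gamma_0 = \sigma_\varepsilon^2/(1-\rho^2)$, obtained by counting how many $(i,j)$ pairs share each lag $k = |j-i|$. A quick check (in Python rather than R) that the two agree:

```python
rho, N = 0.9, 100
gamma0 = 1.0 / (1 - rho ** 2)  # Var(x_t) with sigma_e = 1

# Double loop, as in the R code above
double_loop = sum(gamma0 * rho ** abs(j - i)
                  for i in range(N) for j in range(N)) / N ** 2

# Closed form: lag 0 occurs N times, each lag k >= 1 occurs 2*(N - k) times
closed = gamma0 / N ** 2 * (N + 2 * sum((N - k) * rho ** k
                                        for k in range(1, N)))
```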
|
47,450
|
How to compute the standard error of the mean of an AR(1) process?
|
Don't know if it qualifies as a formal answer, but on the simulation side, standard error of a means estimator is defined as est(sd(means))/sqrt(N), which would give:
> .9459876/sqrt(100)
[1] 0.09459876
Not sure why you were using sd(means) and calling it standard error (if I understood the code comment right). It would make more sense to call the value SE(u) rather than Var(u) in the derivation as well, since I think that's what you intended?
The variance of an AR(1) process is the variance of the error term divided by (1-phi^2), where you had N*(1-phi) (the N wouldn't be there if it was just variance).
I'll have to dig deeper to try to find a derivation of that.
varianceAR1 — simple AR(1) derivation on p. 36
|
47,451
|
Cox regression when reference group had zero events
|
Well, what you're doing wrong is using as the reference group a group with zero events. Instead of hazard ratios, think in simpler terms (in my opinion) of incidence rate ratios (IRRs), where the incidence rate (IR) is $IR=\text{number of cases }/\text{ total person-time}$.
$$IRR_{\text{quartile 4 vs. quartile 1}}=\frac{IR_{\text{quartile 4}}}{IR_{\text{quartile 1}}}$$
What happens if $IR_{\text{quartile 1}}=0$?
You can change your categorisation (use tertiles or some other meaningful categorisation) or, even better, if you have a continuous predictor you can treat it as such and examine potential nonlinear relationships using polynomial terms, fractional polynomials or restricted cubic splines, for example.
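To make the failure mode concrete, here is a tiny Python sketch (the case counts and person-years are made-up, hypothetical numbers): the IRR is well defined for any reference group with at least one event, but a reference group with zero events makes the denominator rate zero.

```python
def incidence_rate(cases, person_time):
    return cases / person_time

def irr(cases_exposed, pt_exposed, cases_ref, pt_ref):
    """Incidence rate ratio of an exposed group vs. a reference group."""
    return incidence_rate(cases_exposed, pt_exposed) / incidence_rate(cases_ref, pt_ref)

# Hypothetical numbers: quartile 4 has 12 events over 480 person-years,
# and a usable reference group has 3 events over 500 person-years.
print(irr(12, 480, 3, 500))  # ~4.17

# A reference group with zero events gives a zero reference rate, so the ratio is undefined:
try:
    irr(12, 480, 0, 500)
except ZeroDivisionError:
    print("undefined IRR: reference incidence rate is zero")
```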
|
47,452
|
Cox regression when reference group had zero events
|
To supplement andrea's response by extending it a bit to hazard ratios:
The hazard of an event is the instantaneous probability of an event occurring at time t, conditional on it not having previously occurred.
Your problem should be clear instantly - with no events, the probability is zero. Borrowing from andrea's example, the incident rate is equivalent to a constant hazard - in your case, a constant hazard of zero.
Dividing by zero tends to make software angry.
You need to switch your reference category. My suggestion is to use "Quartile 4" or the other high value of the category, and step down, rather than using Quartile 1 and stepping up. If you were hoping to, for example, show an increase in the HR as you moved up a category, you're now showing the equivalent protective effect from moving down one.
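In modelling terms, switching the reference category just means choosing which level is dropped from the dummy coding of the covariate. A minimal Python sketch of that recoding (the level names are hypothetical):

```python
def dummy_code(values, reference):
    """One-hot code a categorical variable, dropping the chosen reference level."""
    levels = [lvl for lvl in sorted(set(values)) if lvl != reference]
    return levels, [[int(v == lvl) for lvl in levels] for v in values]

quartiles = ["Q1", "Q2", "Q3", "Q4", "Q4", "Q1"]

# Default: Q1 as reference -- each coefficient is "vs. Q1", which fails with zero events there.
print(dummy_code(quartiles, "Q1")[0])  # ['Q2', 'Q3', 'Q4']

# Switch to Q4 as reference -- coefficients now estimate the effect of stepping down.
print(dummy_code(quartiles, "Q4")[0])  # ['Q1', 'Q2', 'Q3']
```

In R, the equivalent one-liner is relevelling the factor before fitting (e.g. `relevel(factor(q), ref = "Q4")`).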
I would also suggest taking a moment to consider why you have no events.
It's possible you're simply having a run of "bad luck", at which point there's nothing you can do but increase the study size or follow the population for longer in hopes of accumulating more events. But you should also make sure that the probability of having an outcome in your population isn't zero for a structural reason. For cardiac events I can imagine one, but it is always worth stopping to consider why you have zero events in some level of a covariate.
|
47,453
|
What are features that distinguish clustering, blind signal separation and dimensionality reduction?
|
Short Answer:
Clustering and blind signal separation (BSS) are often used together in an application, and when this is the case, the BSS algorithm comes first as a pre-processing step in order to "reduce the dimension of the problem". The original inputs can then be accordingly "cut down" before being fed into a clustering algorithm to optimally segment the now lower order problem. Because the dimensions have been reduced, the result of clustering can now be readily visualised in 2 or 3 dimensions.
In "input-[process]-output" form, we have a linked, three-step filter:
input (high dimensional, mixed source) -> [Blind signal separation] ...
-> ranking of features -> [Dimension reduction] -> lower dimensional inputs ...
-> [Clustering] -> optimal segmentation.
To elaborate:
Suppose that your inputs are vectors, i.e. each of your data points / samples has a number of attributes, say $n$ of them.
Clustering:
In simple terms, clustering takes the inputs, considers them in an $n$-dimensional space, and, given a target number of clusters, runs a mathematical algorithm to decide what should be the centre of each cluster and which points should be assigned to belong to which cluster.
So clustering is essentially mathematical segmentation of your data into groups (optimal segmentation if you will).
But the challenge with using clustering on your raw vector inputs is that the algorithm has to work in $n$-dimensional space -- which makes the result difficult to visualise, and, if many of the attributes are correlated, those extra dimensions add little value to the problem of identifying the best clusters.
Enter blind source separation...
Blind Source Separation:
Blind signal separation (BSS), on the other hand, is about separating a mixture into individual components.
Again, in simple terms, suppose that you have a process that "mixes" or confounds a number of pure signals into an aggregate whole. As an example, think of taking a recording from a number of microphones situated in an orchestra hall where there are a number of instruments all playing the same melody but where there is also quite a bit of local chattering among the audience. The resulting recording is a mixture of all of this.
The question in this case is, from the mixed input, and without knowing how the mixture is composed, can you take the output (recording) and separate out the individual input vectors?
So BSS is essentially an inverse problem in which you start with a mixed input and attempt to separate out the individual elements that went into the mixing process.
BSS, Dimension Reduction, and Clustering:
I mentioned at the start that clustering and BSS are often used together. The reason for that brings in the concept of dimensionality reduction.
The input into BSS consists of mixed signals plus noise (uncorrelated, white noise for example, or low correlated sources of little interest).
BSS works by identifying, from a number of 'features' about the signals (mathematical expressions involving the individual attributes of each vector), those features that 'explain' the greatest variation in the data.
These features can then be ranked in descending order. By taking the top three features, for example, one arrives at a much more manageable number of dimensions in which to perform clustering.
A typical example and real-world application:
So in a typical example, one might first apply PCA (principal components analysis) -- which is a type of BSS algorithm -- to a data set to "discover" the top 3 features that are most useful in explaining the variation in the data set, and then use mathematical clustering on just those 3 features to identify the segments into which the data can be split.
I've seen this combination approach used very successfully in the problem of classification (unsupervised learning) of bottom-oriented sonar signals to determine automatically the type of sea bottom that a vessel is travelling over: is it muddy, sandy, rocky, without having to send a diver down to check.
So, when used in combination, these techniques can become quite powerful tools.
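Not the sonar pipeline itself, but a self-contained toy sketch of the same reduce-then-cluster idea (all data here are synthetic, and the component extraction is plain PCA via power iteration rather than a full BSS algorithm): five correlated input dimensions are collapsed onto the leading principal component, and a simple 1-D k-means then recovers the two groups.

```python
import random
import statistics

def top_principal_component(data, iters=200):
    """Leading eigenvector of the covariance matrix, via power iteration."""
    d = len(data[0])
    mu = [statistics.fmean(col) for col in zip(*data)]
    centered = [[x - m for x, m in zip(row, mu)] for row in data]
    v = [1.0] * d
    for _ in range(iters):
        scores = [sum(x * vi for x, vi in zip(row, v)) for row in centered]
        w = [sum(s * row[j] for s, row in zip(scores, centered)) for j in range(d)]
        norm = sum(wi * wi for wi in w) ** 0.5
        v = [wi / norm for wi in w]
    return mu, v

def kmeans_1d(xs, k=2, iters=50):
    """Plain Lloyd's algorithm on 1-D projected scores."""
    lo, hi = min(xs), max(xs)
    centers = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for x in xs:
            groups[min(range(k), key=lambda i: abs(x - centers[i]))].append(x)
        centers = [statistics.fmean(g) if g else c for g, c in zip(groups, centers)]
    return centers

random.seed(0)
# Synthetic stand-in for the sonar data: two groups in 5 correlated dimensions.
data = [[base + random.gauss(0.0, 0.3) for _ in range(5)]
        for base in [0.0] * 50 + [4.0] * 50]

mu, v = top_principal_component(data)
proj = [sum((x - m) * vi for x, m, vi in zip(row, mu, v)) for row in data]
print(sorted(round(c, 1) for c in kmeans_1d(proj)))  # two centers near -4.5 and +4.5
```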
|
47,454
|
What are features that distinguish clustering, blind signal separation and dimensionality reduction?
|
That Wikipedia article is a mess. No wonder it has been tagged as "cleanup" for more than two years.
If you want to learn about clustering, do not approach it from the learning side.
To the machine learning side, unsupervised learning is the ugly duckling they resort to when they don't have any labeled training data. But they do not really like or understand it. Because actually it is doing something very different. Note that most of the clustering work is done outside of the machine learning community (but in the knowledge discovery community), and they would not call it unsupervised learning.
In learning, you have an objective. You want e.g. to be able to predict the value of future observations. The clear objective in particular helps with evaluation, but also narrows down the search space a lot.
In cluster analysis, you don't have a strict objective. It is an explorative method with, unfortunately, a very big search space, so you need a lot of heuristics and assumptions. You want to explore your data and learn something new - discover some new structure, in this case. If a clustering method gives you the structure you already know, it has in some sense failed its objective. Yet this is how cluster analysis is often approached and evaluated: can it discover the structure that I already knew?
Dimensionality reduction is a technique one would prefer to be able to avoid (as it means dropping some of your data), but higher data complexity usually means much worse processing time. And if there is redundancy in your data, it is very reasonable to reduce the dimensionality first.
Reducing the number of dimensions makes data tractable that was not tractable before. It also helps with finding an appropriate distance function, because the popular distances such as Euclidean distance don't work well in high-dimensional data due to what is known as the "curse of dimensionality". With increasing dimensionality, the distances in your dataset concentrate and become more similar. As most clustering algorithms are based on distances, they fail to find clusters then, as the difference between objects blurs. There are various aspects involved (I remember having just seen an article discussing like ~9 different views of it!), but naively you can see it as a consequence of the central limit theorem. Given enough dimensions, the distances become normally distributed around some mean, and the variance is largely the amount of noise you have across the dimensions. If there is too much noise, the distances are only determined by the noise (not the signal) and your algorithms fail.
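The distance-concentration effect is easy to see numerically. A quick Python sketch (uniform random points in a unit cube; the dimensions and point count are arbitrary choices): the relative spread of pairwise Euclidean distances, sd/mean, shrinks as the dimension grows.

```python
import math
import random
import statistics

def pairwise_distance_spread(dim, n_points=100):
    """Relative spread (sd / mean) of pairwise Euclidean distances among random points."""
    random.seed(42)
    pts = [[random.random() for _ in range(dim)] for _ in range(n_points)]
    dists = [math.dist(p, q) for i, p in enumerate(pts) for q in pts[i + 1:]]
    return statistics.stdev(dists) / statistics.fmean(dists)

for d in (2, 20, 200, 2000):
    print(d, round(pairwise_distance_spread(d), 3))
# The relative spread falls steadily with dimension: distances concentrate.
```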
BSS I can't tell you much about. It has not been on my radar much. I remember having seen the basic idea in a purely audio-analysis domain (isolating 4 voices from a 5-channel audio signal; I remember something about needing n+1 microphones to isolate n voices based on the delay alone). It definitely is not the essential technique of unsupervised learning...
|
47,455
|
Resource to read about distributions
|
Here are two more distribution resources. They are descriptive and present equations, without much proof, application, or even discussion.
From NIST: http://www.itl.nist.gov/div898/handbook/eda/section3/eda366.htm
From Dr. M.P. McLaughlin: http://www.causascientia.org/math_stat/Dists/Compendium.pdf
|
47,456
|
Resource to read about distributions
|
Look at the series "Distributions in Statistics" by Johnson and Kotz.
Continuous Univariate Distributions, Vol. 1 (Wiley Series in Probability and Statistics)
Continuous Univariate Distributions, Vol. 2 (Wiley Series in Probability and Statistics)
Univariate Discrete Distributions (Wiley Series in Probability and Statistics)
Continuous Multivariate Distributions, Volume 1, Models and Applications, 2nd Edition
They have a volume on discrete distributions, two volumes on univariate continuous distributions and one on multivariate continuous distributions. The various statistical encyclopedias are good sources and so is
Kendall's Advanced Theory of Statistics, Distribution Theory (Volume 1)
Free online information can be found in Wikipedia or through Google searches. There is a lot of good stuff out there.
|
47,457
|
How do I decide which family of variance/link functions to use in a generalized linear model?
|
It depends on the nature of your dependent variable:
Gaussian is for continuous DV (this is ordinary least squares)
Binomial, as you note, is for a binary DV (logistic regression).
Poisson is for count data (non-negative integers). See also quasipoisson.
Gamma is for continuous DV that is always positive (although often you can use Gaussian here, if the mean is $>> 0$ and the sd isn't huge - that is, if all the values are quite far from 0).
Inverse Gaussian is, I believe, used for survival data (time to event).
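The rules of thumb above can be codified as a small lookup; this is a hypothetical helper, not a standard API, and the link shown for each family is just the conventional canonical default, not the only valid choice.

```python
# DV description -> (GLM family, canonical/default link)
FAMILY_BY_DV = {
    "continuous, unbounded": ("gaussian", "identity"),
    "binary / proportion": ("binomial", "logit"),
    "count (non-negative integers)": ("poisson", "log"),
    "continuous, strictly positive": ("gamma", "inverse (log also common)"),
    "time to event": ("inverse gaussian", "1/mu^2"),
}

def suggest_family(dv_kind):
    """Map a description of the dependent variable to a (family, link) suggestion."""
    return FAMILY_BY_DV.get(dv_kind, ("unknown", "inspect the distribution of the DV"))

print(suggest_family("count (non-negative integers)"))  # ('poisson', 'log')
```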
|
47,458
|
"Running it" multiple times in No-Limit Hold'em poker
|
Expected value is linear, even over dependent random variables. The chance to win each run (not conditional on particular outcomes of the previous runs) is the same as the chance to win the original, so there is no advantage in expected winnings to running the hand $n$ times instead of once. Although some people find it counterintuitive, running the hand multiple times doesn't favor either the drawing hand or the hand which is ahead, whether the draw is likely to hit or not.
The variance is reduced significantly when you run the hand several times. It can greatly affect the probability that the player is ahead or behind for the day, or the probability of reaching or keeping other thresholds of psychological importance.
Some people feel that allowing your opponents to run a hand multiple times decreases their risk aversion, and mistakes due to risk aversion, earlier in the hand. This is one reason some players such as Barry Greenstein might have the policy of not letting people run the hand multiple times.
Another effect of running the hand multiple times is to decrease the probability that players bust out, and have to decide whether to rebuy or leave. Leaving may bring in other players you would prefer to play against or may leave the table shorthanded, which may suit you if you are relatively strong at shorthanded play. If a player wins the whole pot, he may also reach an uncomfortable stack depth where he might make more errors. Running the hand multiple times also takes time which could be spent playing poker instead.
On the other hand, if someone asks to run the hand multiple times, and you refuse, this may look unfriendly, and a casual player may decide to spend his money elsewhere.
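Both claims - equal expected value, reduced variance - can be checked with a simplified Monte Carlo sketch in Python. The setup is an assumption for illustration: a flush draw with 9 outs among 45 unseen cards on the flop, where "win" simply means a flush card lands on one of the two run-out cards (board-pairing redraws and such are ignored).

```python
import random
import statistics

random.seed(7)
OUTS, UNSEEN = 9, 45   # flush draw on the flop; both players' hole cards are known

def pot_share(n_runs):
    """Drawing hand's share of the pot after dealing n_runs two-card boards without replacement."""
    deck = [1] * OUTS + [0] * (UNSEEN - OUTS)   # 1 = a card completing the flush
    random.shuffle(deck)
    wins = sum(1 for r in range(n_runs) if deck[2 * r] or deck[2 * r + 1])
    return wins / n_runs

once = [pot_share(1) for _ in range(50_000)]
twice = [pot_share(2) for _ in range(50_000)]

# Same expected share either way (~0.364 here), but a much smaller variance with two runs.
print(statistics.fmean(once), statistics.fmean(twice))
print(statistics.pvariance(once), statistics.pvariance(twice))
```

The exact single-run equity in this setup is $1-\binom{36}{2}/\binom{45}{2}=4/11\approx 0.364$, and the simulated means for one and two runs both land on it.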
|
47,459
|
"Running it" multiple times in No-Limit Hold'em poker
|
"Figuring out the percentage chance of 1 player winning is no easy feat for running it out once only"
On the contrary, approximations are easy on the flop and turn.
From the flop, with 2 cards to come. Percent to win = (# outs) x 4. Example, if you have 9 clean flush outs on the flop, then you are about 36% (actually, slightly less than 36%).
On the turn, with 1 card to come, percent to win = (# outs) x 2 - 1. Example: you missed the flush on the turn but still have 9 clean outs, so it's (9 x 2) - 1, or about 17%.
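The approximations can be checked against the exact counting. A quick Python check (the unseen-card counts assume only your own hole cards and the board are known: 47 unseen on the flop, 46 on the turn):

```python
from math import comb

def exact_two_cards(outs, unseen=47):
    """P(hit at least one out on the turn or river), from the flop."""
    return 1 - comb(unseen - outs, 2) / comb(unseen, 2)

def exact_one_card(outs, unseen=46):
    """P(hit an out on the river), from the turn."""
    return outs / unseen

outs = 9  # clean flush outs
print(exact_two_cards(outs), outs * 4 / 100)        # ~0.350 exact vs. 0.36 from the rule of 4
print(exact_one_card(outs), (outs * 2 - 1) / 100)   # ~0.196 exact vs. 0.17 from the rule
```

So the rules of thumb are within a few percentage points of the exact hypergeometric probabilities, which is all they are meant to be.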
Re: running it twice.
E.g., Let's say that I have a flush draw (9 outs and thus, about 36% on the flop to win) against a made hand on the flop. We agree to run it twice.
At this point, I am behind and will win only about 36% of the time if we run it once. But if we run it twice, my chance of winning or splitting the pot is greater than 36%, because at this point I'd be happy with a split pot and all I'm looking to do is win at least one of the runs.
So run it once, I am 36%.
Run it twice, and given the additional ~36% chance on the second run, my chance to chop OR win is greater than the 36% chance to win in a single run. Ergo, there is a benefit to running it twice for the person getting the chips in bad. It doesn't increase the chance of winning the whole pot, but since there is now a better than 36% chance of winning OR chopping the pot, this is surely a benefit.
Thus, to answer the OP's question. There is a benefit to running it twice for the person getting the chips in bad. It doesn't increase their chances of winning the pot. But there is the added value/possibility of a chopped pot, which should be clearly welcome for anyone getting their chips in bad.
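As a rough sanity check, the win-or-chop probability when running it twice can be simulated. A minimal R sketch, under the simplifying assumption that the two runs are independent with the same 36% equity (in reality card removal makes the runs slightly dependent):

```r
# Simulate running it twice with ~36% equity per run.
# Simplifying assumption: runs treated as independent, ignoring card removal.
set.seed(1)
n <- 100000
run1 <- runif(n) < 0.36
run2 <- runif(n) < 0.36
p_win_or_chop <- mean(run1 | run2)   # win at least one of the two runs
p_win_both    <- mean(run1 & run2)   # scoop the whole pot
p_win_or_chop                        # roughly 1 - 0.64^2 = 0.5904
```

So the drawing hand salvages at least half the pot far more often than the 36% it wins a single run, at the cost of scooping the whole pot much less often.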
|
"Running it" multiple times in No-Limit Hold'em poker
|
"Figuring out the percentage chance of 1 player winning is no easy feat for running it out once only"
On the contrary, approximations are easy on the flop and turn.
From the flop, with 2 cards to
|
"Running it" multiple times in No-Limit Hold'em poker
"Figuring out the percentage chance of 1 player winning is no easy feat for running it out once only"
On the contrary, approximations are easy on the flop and turn.
From the flop, with 2 cards to come. Percent to win = (# outs) x 4. Example, if you have 9 clean flush outs on the flop, then you are about 36% (actually, slightly less than 36%).
On the turn, with 1 card to come, percent to win = (# outs) x 2 - 1. Example, you missed the flush on the turn but still have 9 clean outs, then it's (9 x 2) - 1 or about 17%.
Re: running it twice.
E.g., Let's say that I have a flush draw (9 outs and thus, about 36% on the flop to win) against a made hand on the flop. We agree to run it twice.
At this point, I am behind and will lose 36% of the time if we run it once. But if we run it twice, my chances of winning or splitting the pot is greater than 36%. Because at this point, I'd be happy with a split pot and all I'm looking to do is win at least one of the runs.
So run it once, I am 36%.
Run it twice, and since the second run gives me roughly an additional 36% chance, my chance to chop OR win is greater than the 36% chance to win in a single run. Ergo, there is a benefit to running it twice for the person getting the chips in bad. It doesn't increase the chance of winning the whole pot, but since there is now a better than 36% chance of winning OR chopping the pot, this is surely a benefit.
Thus, to answer the OP's question. There is a benefit to running it twice for the person getting the chips in bad. It doesn't increase their chances of winning the pot. But there is the added value/possibility of a chopped pot, which should be clearly welcome for anyone getting their chips in bad.
|
"Running it" multiple times in No-Limit Hold'em poker
"Figuring out the percentage chance of 1 player winning is no easy feat for running it out once only"
On the contrary, approximations are easy on the flop and turn.
From the flop, with 2 cards to
|
47,460
|
Correlation and non-normal distributions
|
It sounds like this problem can be solved with Ruscio & Kaczetow's algorithm for generating correlated non-normal variables. It's flexible enough to work for bimodal distributions. Their article includes R code.
Reference:
Ruscio, J., & Kaczetow, W. (2008). Simulating multivariate nonnormal data using an iterative algorithm. Multivariate Behavioral Research, 43, 355–381. doi:10.1080/00273170802285693
|
Correlation and non-normal distributions
|
It sounds like this problem can be solved with Ruscio & Kaczetow's algorithm for generating correlated non-normal variables. It's flexible enough to work for bimodal distributions. Their article inc
|
Correlation and non-normal distributions
It sounds like this problem can be solved with Ruscio & Kaczetow's algorithm for generating correlated non-normal variables. It's flexible enough to work for bimodal distributions. Their article includes R code.
Reference:
Ruscio, J., & Kaczetow, W. (2008). Simulating multivariate nonnormal data using an iterative algorithm. Multivariate Behavioral Research, 43, 355–381. doi:10.1080/00273170802285693
|
Correlation and non-normal distributions
It sounds like this problem can be solved with Ruscio & Kaczetow's algorithm for generating correlated non-normal variables. It's flexible enough to work for bimodal distributions. Their article inc
|
47,461
|
Frequentist properties of p-values in relation to type I error
|
I can't for the life of me get that applet to run in my browser, so I'll try to give an example using R instead.
As noted in the comments, it seems that what caused the confusion is that the applet runs under both the alternative and the null hypothesis. To check that the type I error rate really is $0.05$ you need to run it under the null hypothesis only.
Here is an example where we use the $t$-test to test whether the mean $\mu$ of a normal distribution equals $0$. That is, we test $H_0: \mu=0.$ We simulate $10,000$ samples from ${\rm N}(0,\sigma^2)$ and compute the $p$-value for each sample.
We also simulate $10,000$ samples from the ${\rm N}(0.25,\sigma^2)$ and ${\rm N}(0.5,\sigma^2)$ distributions and compute the $p$-values.
set.seed(201208)
B<-10000
p.values1<-p.values2<-p.values3<-vector(length=B)
for(i in 1:B)
{
x1<-rnorm(25)
p.values1[i]<-t.test(x1)$p.value
x2<-rnorm(25,0.25)
p.values2[i]<-t.test(x2)$p.value
x3<-rnorm(25,0.5)
p.values3[i]<-t.test(x3)$p.value
}
We can now compute the proportion of samples that lead to a rejection of $H_0: \mu=0$ at the $5~\%$ level:
sum(p.values1<=0.05)/B
sum(p.values2<=0.05)/B
sum(p.values3<=0.05)/B
In this case, the answers are $0.0505$ under the null hypothesis ($\approx 0.05$, just as we would expect!), $0.2187$ when $\mu=0.25$ and $0.6754$ when $\mu=0.5$.
We can visualize the results by plotting histograms of the $p$-values:
For $\mu=0$, the $p$-values are uniformly distributed on $\lbrack 0,1\rbrack$. Under the alternatives, the distribution of the $p$-values has more mass closer to $0$ (more so the further away from $0$ that $\mu$ is).
We can also compare the distribution of the $p$-values using box-and-whiskers-plots:
Hopefully it is clear from the picture that the probability of rejection, i.e. the probability that the $p$-value is lower than $0.05$ depends on whether the null hypothesis or an alternative hypothesis is true. In this case, you should only expect the rejection rate to be $0.05$ when $\mu=0$.
The code for producing these plots is:
#Boxplots:
boxplot(p.values1,p.values2,p.values3,names=c("mu=0","mu=0.25","mu=0.5"))
# Histograms:
par(mfrow=c(1,3))
hist(p.values1,main="mu=0")
hist(p.values2,main="mu=0.25")
hist(p.values3,main="mu=0.5")
|
Frequentist properties of p-values in relation to type I error
|
I can't for the life of me get that applet to run in my browser, so I'll try to give an example using R instead.
As noted in the comments, it seems that what caused the confusion is that the applet ru
|
Frequentist properties of p-values in relation to type I error
I can't for the life of me get that applet to run in my browser, so I'll try to give an example using R instead.
As noted in the comments, it seems that what caused the confusion is that the applet runs under both the alternative and the null hypothesis. To check that the type I error rate really is $0.05$ you need to run it under the null hypothesis only.
Here is an example where we use the $t$-test to test whether the mean $\mu$ of a normal distribution equals $0$. That is, we test $H_0: \mu=0.$ We simulate $10,000$ samples from ${\rm N}(0,\sigma^2)$ and compute the $p$-value for each sample.
We also simulate $10,000$ samples from the ${\rm N}(0.25,\sigma^2)$ and ${\rm N}(0.5,\sigma^2)$ distributions and compute the $p$-values.
set.seed(201208)
B<-10000
p.values1<-p.values2<-p.values3<-vector(length=B)
for(i in 1:B)
{
x1<-rnorm(25)
p.values1[i]<-t.test(x1)$p.value
x2<-rnorm(25,0.25)
p.values2[i]<-t.test(x2)$p.value
x3<-rnorm(25,0.5)
p.values3[i]<-t.test(x3)$p.value
}
We can now compute the proportion of samples that lead to a rejection of $H_0: \mu=0$ at the $5~\%$ level:
sum(p.values1<=0.05)/B
sum(p.values2<=0.05)/B
sum(p.values3<=0.05)/B
In this case, the answers are $0.0505$ under the null hypothesis ($\approx 0.05$, just as we would expect!), $0.2187$ when $\mu=0.25$ and $0.6754$ when $\mu=0.5$.
We can visualize the results by plotting histograms of the $p$-values:
For $\mu=0$, the $p$-values are uniformly distributed on $\lbrack 0,1\rbrack$. Under the alternatives, the distribution of the $p$-values has more mass closer to $0$ (more so the further away from $0$ that $\mu$ is).
We can also compare the distribution of the $p$-values using box-and-whiskers-plots:
Hopefully it is clear from the picture that the probability of rejection, i.e. the probability that the $p$-value is lower than $0.05$ depends on whether the null hypothesis or an alternative hypothesis is true. In this case, you should only expect the rejection rate to be $0.05$ when $\mu=0$.
The code for producing these plots is:
#Boxplots:
boxplot(p.values1,p.values2,p.values3,names=c("mu=0","mu=0.25","mu=0.5"))
# Histograms:
par(mfrow=c(1,3))
hist(p.values1,main="mu=0")
hist(p.values2,main="mu=0.25")
hist(p.values3,main="mu=0.5")
|
Frequentist properties of p-values in relation to type I error
I can't for the life of me get that applet to run in my browser, so I'll try to give an example using R instead.
As noted in the comments, it seems that what caused the confusion is that the applet ru
|
47,462
|
Maximum likelihood estimation in a Poisson model for football (soccer) scores
|
The bivpois package, written by Karlis and Ntzoufras, uses the EM-algorithm for maximum likelihood estimation in this kind of bivariate Poisson models (and some generalisations of them). I don't think that it's on CRAN anymore, but you can find it here.
For more information, see the description of the package in Journal of Statistical Software and the related paper by Karlis and Ntzoufras in The Statistician (which is a continuation of the work by Dixon and Coles).
The package contains examples where you can see how to format your data. It's been a few years since I played around with it, but from what I remember it was quite easy to use.
|
Maximum likelihood estimation in a Poisson model for football (soccer) scores
|
The bivpois package, written by Karlis and Ntzoufras, uses the EM-algorithm for maximum likelihood estimation in this kind of bivariate Poisson models (and some generalisations of them). I don't think
|
Maximum likelihood estimation in a Poisson model for football (soccer) scores
The bivpois package, written by Karlis and Ntzoufras, uses the EM-algorithm for maximum likelihood estimation in this kind of bivariate Poisson models (and some generalisations of them). I don't think that it's on CRAN anymore, but you can find it here.
For more information, see the description of the package in Journal of Statistical Software and the related paper by Karlis and Ntzoufras in The Statistician (which is a continuation of the work by Dixon and Coles).
The package contains examples where you can see how to format your data. It's been a few years since I played around with it, but from what I remember it was quite easy to use.
|
Maximum likelihood estimation in a Poisson model for football (soccer) scores
The bivpois package, written by Karlis and Ntzoufras, uses the EM-algorithm for maximum likelihood estimation in this kind of bivariate Poisson models (and some generalisations of them). I don't think
|
47,463
|
Maximum likelihood estimation in a Poisson model for football (soccer) scores
|
You should look at the VGAM package - it has functions to fit the Bradley-Terry model described in the linked questions in the comments.
|
Maximum likelihood estimation in a Poisson model for football (soccer) scores
|
You should look at the VGAM package - it has functions to fit the Bradley-Terry model described in the linked questions in the comments.
|
Maximum likelihood estimation in a Poisson model for football (soccer) scores
You should look at the VGAM package - it has functions to fit the Bradley-Terry model described in the linked questions in the comments.
|
Maximum likelihood estimation in a Poisson model for football (soccer) scores
You should look at the VGAM package - it has functions to fit the Bradley-Terry model described in the linked questions in the comments.
|
47,464
|
How to combine values based on standard errors?
|
If you believe that these two means are both estimates of the same true value then inverse-variance weighting is the way to go. That's equivalent to fixed-effect meta-analysis.
If you believe that the means are estimating different true values, then things get more tricky. If there were more means, you could do random-effects meta-analysis. In principle that still works with only two means, but your estimate of the between-sample variance will be very imprecise. A fully Bayesian analysis would put an informative prior on the between-sample variance.
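For two means $m_1, m_2$ with standard errors $s_1, s_2$, the fixed-effect (inverse-variance) pooled estimate is $\hat m = (m_1/s_1^2 + m_2/s_2^2)/(1/s_1^2 + 1/s_2^2)$ with standard error $1/\sqrt{1/s_1^2 + 1/s_2^2}$. A minimal R sketch (the numbers are made up for illustration):

```r
# Inverse-variance (fixed-effect) pooling of two means.
m  <- c(10.2, 11.1)   # hypothetical estimates
se <- c(0.8, 1.5)     # their standard errors
w  <- 1 / se^2        # inverse-variance weights
pooled    <- sum(w * m) / sum(w)
pooled_se <- sqrt(1 / sum(w))
c(pooled = pooled, se = pooled_se)
```

Note that the pooled standard error is smaller than either individual standard error, and the pooled mean sits closer to the more precise estimate.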
|
How to combine values based on standard errors?
|
If you believe that these two means are both estimates of the same true value then inverse-variance weighting is the way to go. That's equivalent to fixed-effect meta-analysis.
If you believe that the
|
How to combine values based on standard errors?
If you believe that these two means are both estimates of the same true value then inverse-variance weighting is the way to go. That's equivalent to fixed-effect meta-analysis.
If you believe that the means are estimating different true values, then things get more tricky. If there were more means, you could do random-effects meta-analysis. In principle that still works with only two means, but your estimate of the between-sample variance will be very imprecise. A fully Bayesian analysis would put an informative prior on the between-sample variance.
|
How to combine values based on standard errors?
If you believe that these two means are both estimates of the same true value then inverse-variance weighting is the way to go. That's equivalent to fixed-effect meta-analysis.
If you believe that the
|
47,465
|
How can I check if my time series data is zero mean, stationary and independent identically distributed?
|
The errors from the model should have a zero mean, or a mean that is not significantly different from zero everywhere. (1) In practice this means no pulses, no level/step shifts, no seasonal pulses and no local time trends. (2) The variance of the errors from the final model should be constant, which means no structural shift in error variance or dependency on the level of the original series. (3) The parameters of the model must be invariant over time. The appropriate tests for (1) are available via intervention detection tests (Tiao, Tsay and others). The appropriate test for (2) is both the Tsay test for constant error variance AND the Box-Cox test for transformations. The test for (3) is the Chow test. In case you don't want to program these tests yourself, as they are not freely available, you might want to look at AUTOBOX. All of these have been seamlessly integrated into a piece of software that I helped write, available from http://www.autobox.com. Hope this helps.
|
How can I check if my time series data is zero mean, stationary and independent identically distribu
|
The errors from the model should have a zero mean or a mean that is not significantly different from zero everywhere. (1) In practice this means no Pulses, no Level/Step shifts , no seasonal pulses an
|
How can I check if my time series data is zero mean, stationary and independent identically distributed?
The errors from the model should have a zero mean, or a mean that is not significantly different from zero everywhere. (1) In practice this means no pulses, no level/step shifts, no seasonal pulses and no local time trends. (2) The variance of the errors from the final model should be constant, which means no structural shift in error variance or dependency on the level of the original series. (3) The parameters of the model must be invariant over time. The appropriate tests for (1) are available via intervention detection tests (Tiao, Tsay and others). The appropriate test for (2) is both the Tsay test for constant error variance AND the Box-Cox test for transformations. The test for (3) is the Chow test. In case you don't want to program these tests yourself, as they are not freely available, you might want to look at AUTOBOX. All of these have been seamlessly integrated into a piece of software that I helped write, available from http://www.autobox.com. Hope this helps.
|
How can I check if my time series data is zero mean, stationary and independent identically distribu
The errors from the model should have a zero mean or a mean that is not significantly different from zero everywhere. (1) In practice this means no Pulses, no Level/Step shifts , no seasonal pulses an
|
47,466
|
How can I check if my time series data is zero mean, stationary and independent identically distributed?
|
Box and Jenkins suggested using the autocorrelation and partial autocorrelation functions to identify the model. The general Box-Jenkins models are seasonal ARIMA models which allow for nonstationary components (periodic components and polynomial trend). The rule for testing for nonstationarity is to compute the autocorrelation function, and if the correlations are large and drop off slowly, that is an indication of nonstationarity. This is somewhat subjective, but IrishStat does this in a more formal automated way with AUTOBOX. As he mentioned, level shifts and pulses can also be indications of nonstationary behavior, which the AUTOBOX procedures will detect and incorporate into the model. Once you fit the model, the diagnostic checking looks at the residuals to determine if they are independent with 0 mean and constant variance. Independence cannot be tested for directly, but you can test for significant autocorrelation in the residuals, and such correlation is an indication of lack of independence. The Ljung-Box test is a general test for correlation in the residual series. If zero autocorrelation is rejected, higher-order AR and/or MA terms may be needed. If the residual series also looks nonstationary in terms of slowly decaying autocorrelation, then differencing of the series (1st, 2nd or 3rd, for linear, quadratic or cubic trends respectively) can be tried. If there appear to be periodic components remaining in the residual series, then seasonal differencing can be used.
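The Ljung-Box test mentioned above is available in base R via Box.test. A minimal sketch on simulated series (white noise standing in for well-behaved residuals, a random walk standing in for a nonstationary series whose slowly decaying autocorrelation the test flags strongly):

```r
# Ljung-Box test for autocorrelation in a (residual) series.
set.seed(42)
resid_series <- rnorm(200)               # white noise stands in for model residuals
acf(resid_series, plot = FALSE)          # sample autocorrelations, for inspection
lb <- Box.test(resid_series, lag = 10, type = "Ljung-Box")
lb$p.value                               # large p-value: no evidence of correlation

walk <- cumsum(rnorm(200))               # random walk: strong, slowly decaying ACF
Box.test(walk, lag = 10, type = "Ljung-Box")$p.value  # essentially zero
```

A rejection here would suggest adding AR and/or MA terms, or differencing, as described above.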
|
How can I check if my time series data is zero mean, stationary and independent identically distribu
|
Box and Jenkins suggest used the autocorrelation and partial autocorrelation functions to identify the model. The general Box Jenkins models are seasonal ARIMA modles which allow for nonstationary c
|
How can I check if my time series data is zero mean, stationary and independent identically distributed?
Box and Jenkins suggested using the autocorrelation and partial autocorrelation functions to identify the model. The general Box-Jenkins models are seasonal ARIMA models which allow for nonstationary components (periodic components and polynomial trend). The rule for testing for nonstationarity is to compute the autocorrelation function, and if the correlations are large and drop off slowly, that is an indication of nonstationarity. This is somewhat subjective, but IrishStat does this in a more formal automated way with AUTOBOX. As he mentioned, level shifts and pulses can also be indications of nonstationary behavior, which the AUTOBOX procedures will detect and incorporate into the model. Once you fit the model, the diagnostic checking looks at the residuals to determine if they are independent with 0 mean and constant variance. Independence cannot be tested for directly, but you can test for significant autocorrelation in the residuals, and such correlation is an indication of lack of independence. The Ljung-Box test is a general test for correlation in the residual series. If zero autocorrelation is rejected, higher-order AR and/or MA terms may be needed. If the residual series also looks nonstationary in terms of slowly decaying autocorrelation, then differencing of the series (1st, 2nd or 3rd, for linear, quadratic or cubic trends respectively) can be tried. If there appear to be periodic components remaining in the residual series, then seasonal differencing can be used.
|
How can I check if my time series data is zero mean, stationary and independent identically distribu
Box and Jenkins suggest used the autocorrelation and partial autocorrelation functions to identify the model. The general Box Jenkins models are seasonal ARIMA modles which allow for nonstationary c
|
47,467
|
How to simulate Signal-Noise Ratio?
|
Given a model
$$
Y = f(X) + \varepsilon
$$
The signal to noise ratio can be defined as (ref. ESL10) :
$$
\frac{Var(f(X))}{Var(\varepsilon)}
$$
To generate data with a specific signal to noise ratio:
signal_to_noise_ratio <- 4
data <- c(0.47, 0.45, 0.30, 1.15, 0.82, 0.38, 0.51, 1.36, 1.72, 0.36)
noise <- rnorm(length(data))  # standard normal errors, one per observation
k <- sqrt(var(data)/(signal_to_noise_ratio*var(noise)))  # scale noise to hit the target ratio
data_wNoise <- data + k*noise
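As a quick check, the achieved ratio can be recovered from the generated series; a self-contained sketch repeating the construction:

```r
# Verify that the scaling achieves the requested signal-to-noise ratio.
signal_to_noise_ratio <- 4
data  <- c(0.47, 0.45, 0.30, 1.15, 0.82, 0.38, 0.51, 1.36, 1.72, 0.36)
noise <- rnorm(length(data))
k <- sqrt(var(data) / (signal_to_noise_ratio * var(noise)))
var(data) / var(k * noise)   # equals 4 exactly, by construction
```

The ratio is exact here because k is computed from the sample variance of the realized noise; with a fixed k it would only hold in expectation.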
|
How to simulate Signal-Noise Ratio?
|
Given a model
$$
Y = f(X) + \varepsilon
$$
The signal to noise ratio can be defined as (ref. ESL10) :
$$
\frac{Var(f(X))}{Var(\varepsilon)}
$$
To generate data with a specific signal to noise ratio:
|
How to simulate Signal-Noise Ratio?
Given a model
$$
Y = f(X) + \varepsilon
$$
The signal to noise ratio can be defined as (ref. ESL10) :
$$
\frac{Var(f(X))}{Var(\varepsilon)}
$$
To generate data with a specific signal to noise ratio:
signal_to_noise_ratio <- 4
data <- c(0.47, 0.45, 0.30, 1.15, 0.82, 0.38, 0.51, 1.36, 1.72, 0.36)
noise <- rnorm(length(data))  # standard normal errors, one per observation
k <- sqrt(var(data)/(signal_to_noise_ratio*var(noise)))  # scale noise to hit the target ratio
data_wNoise <- data + k*noise
|
How to simulate Signal-Noise Ratio?
Given a model
$$
Y = f(X) + \varepsilon
$$
The signal to noise ratio can be defined as (ref. ESL10) :
$$
\frac{Var(f(X))}{Var(\varepsilon)}
$$
To generate data with a specific signal to noise ratio:
|
47,468
|
On the corrections for multiple comparisons
|
I don't think that the fact that you have found significant differences in all fifteen of your comparisons makes a difference. To maintain the familywise error rate, I would be tempted to simply apply a Bonferroni correction. Perhaps it's good to be conservative in this instance given your small sample size (indeed, even smaller than the number of comparisons you are making) and the fact that you began the analysis with no predefined assumptions. Although if this latter fact is true, replication in an independent population would really be in order if you want to make stronger conclusions.
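In R, the Bonferroni correction is a one-liner via p.adjust; a minimal sketch with made-up p-values for fifteen comparisons:

```r
# Bonferroni adjustment for 15 pairwise comparisons (illustrative p-values).
p <- c(0.001, 0.004, 0.012, 0.020, 0.031, 0.0005, 0.008, 0.015,
       0.002, 0.025, 0.003, 0.010, 0.018, 0.007, 0.028)
p_bonf <- p.adjust(p, method = "bonferroni")  # equivalently pmin(1, p * 15)
sum(p_bonf <= 0.05)   # how many comparisons survive the familywise correction
```

With these illustrative values, several raw p-values below 0.05 no longer survive after the correction, which is exactly the conservatism discussed above.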
|
On the corrections for multiple comparisons
|
I don't think that the fact that you have found significant differences in all fifteen of your comparisons makes a difference. To maintain the familywise error rate, I would be tempted to simply apply
|
On the corrections for multiple comparisons
I don't think that the fact that you have found significant differences in all fifteen of your comparisons makes a difference. To maintain the familywise error rate, I would be tempted to simply apply a Bonferroni correction. Perhaps it's good to be conservative in this instance given your small sample size (indeed, even smaller than the number of comparisons you are making) and the fact that you began the analysis with no predefined assumptions. Although if this latter fact is true, replication in an independent population would really be in order if you want to make stronger conclusions.
|
On the corrections for multiple comparisons
I don't think that the fact that you have found significant differences in all fifteen of your comparisons makes a difference. To maintain the familywise error rate, I would be tempted to simply apply
|
47,469
|
On the corrections for multiple comparisons
|
Alexander's answer is very good and he makes a good suggestion. It does seem a little surprising that everything is significant when the sample size is relatively small as in your example. The Bonferroni bound may be too conservative though if some of the p-values are close to 0.05. I believe that p-value adjustment using a bootstrap or permutation approach might do better. In SAS you can do this with PROC MULTTEST. If you are not familiar with these methods look at the text by Westfall and Young titled Resampling-Based Multiple Testing: Examples and Methods for p-Value Adjustment.
|
On the corrections for multiple comparisons
|
Alexander's answer is very good and he makes a good suggestion. It does seem a little surprising that everything is significant when the sample size is relatively small as in your example. The Bonfe
|
On the corrections for multiple comparisons
Alexander's answer is very good and he makes a good suggestion. It does seem a little surprising that everything is significant when the sample size is relatively small as in your example. The Bonferroni bound may be too conservative though if some of the p-values are close to 0.05. I believe that p-value adjustment using a bootstrap or permutation approach might do better. In SAS you can do this with PROC MULTTEST. If you are not familiar with these methods look at the text by Westfall and Young titled Resampling-Based Multiple Testing: Examples and Methods for p-Value Adjustment.
|
On the corrections for multiple comparisons
Alexander's answer is very good and he makes a good suggestion. It does seem a little surprising that everything is significant when the sample size is relatively small as in your example. The Bonfe
|
47,470
|
On the corrections for multiple comparisons
|
This is one of many questions on here that comes at the problem of multiple comparisons backwards. Indeed, it is basically nonsensical for researchers to look at their results and then ask, "what correction for multiple comparisons is appropriate for p-values that look like this?" Researchers should decide BEFORE looking at the p-values (or at least independently of what the p-values are) what correction is appropriate for the hypotheses being tested.
Since this type of question comes up fairly frequently, perhaps we should create a single page that all such questions can be redirected to somehow.
|
On the corrections for multiple comparisons
|
This is one of many questions on here that comes at the problem of multiple comparisons backwards. Indeed, it is basically nonsensical for researchers to look at their results and then ask, "what corr
|
On the corrections for multiple comparisons
This is one of many questions on here that comes at the problem of multiple comparisons backwards. Indeed, it is basically nonsensical for researchers to look at their results and then ask, "what correction for multiple comparisons is appropriate for p-values that look like this?" Researchers should decide BEFORE looking at the p-values (or at least independently of what the p-values are) what correction is appropriate for the hypotheses being tested.
Since this type of question comes up fairly frequently, perhaps we should create a single page that all such questions can be redirected to somehow.
|
On the corrections for multiple comparisons
This is one of many questions on here that comes at the problem of multiple comparisons backwards. Indeed, it is basically nonsensical for researchers to look at their results and then ask, "what corr
|
47,471
|
Cumulative Incidence vs. Kaplan Meier to estimate probability of failure
|
Let me state up front that I don't have answers for all of your questions. I'm not as strong on competing risks as on simpler applications of survival analysis. So, I will just throw out a couple of pieces of information here that may be helpful. I suspect KM curves are more common because they are older and conceptually easier to understand (for both the researcher and the consumer of research). If the competing risks are truly independent, then I believe the KM estimates should be unbiased. That is, a plausible reason why people may prefer KM curves is that many people already understand them, and if those patients who had died due to other causes would have followed the same path as everyone else if they hadn't, the KM curves usefully illustrate what was learned from the study.
Regarding the question of whether there is over-estimation in the literature, one relevant fact, distinct from these issues, is that for practical purposes 'significance' is often required for publication. This guarantees that the literature is biased (specifically over-estimated), an issue known as the file drawer problem.
|
Cumulative Incidence vs. Kaplan Meier to estimate probability of failure
|
Let me state up front that I don't have answers for all of your questions. I'm not as strong on competing risks as simpler applications of survival analysis. So, I will just throw out a couple of pi
|
Cumulative Incidence vs. Kaplan Meier to estimate probability of failure
Let me state up front that I don't have answers for all of your questions. I'm not as strong on competing risks as on simpler applications of survival analysis. So, I will just throw out a couple of pieces of information here that may be helpful. I suspect KM curves are more common because they are older and conceptually easier to understand (for both the researcher and the consumer of research). If the competing risks are truly independent, then I believe the KM estimates should be unbiased. That is, a plausible reason why people may prefer KM curves is that many people already understand them, and if those patients who had died due to other causes would have followed the same path as everyone else if they hadn't, the KM curves usefully illustrate what was learned from the study.
Regarding the question of whether there is over-estimation in the literature, one relevant fact, distinct from these issues, is that for practical purposes 'significance' is often required for publication. This guarantees that the literature is biased (specifically over-estimated), an issue known as the file drawer problem.
|
Cumulative Incidence vs. Kaplan Meier to estimate probability of failure
Let me state up front that I don't have answers for all of your questions. I'm not as strong on competing risks as simpler applications of survival analysis. So, I will just throw out a couple of pi
|
47,472
|
Cumulative Incidence vs. Kaplan Meier to estimate probability of failure
|
You should look at the work of Jason Fine on competing risk modeling. In place of Kaplan-Meier there is the cumulative incidence function; analogous to the hazard function in survival analysis is the cause-specific hazard function. There is a competing risk model, the Fine-Gray model, that he uses. I heard him speak on this recently at a meeting in Connecticut at Yale. If you email me I can send you the lecture slides. Here is some information on references and a link to Bob Gray's website that I took off one of his slides.
INFERENCE FOR CIF: ONE SAMPLE, RIGHT CENSORING
• Naive approach to estimation of Fk using Kaplan-Meier (KM) is invalid with dependent risks
• Even if risks independent, KM estimates distribution of Tk, where death from other causes is impossible
• Valid estimator (equivalent to MLE) obtained by substituting KM estimator of S and NA estimator of k in Fk (Aalen, 1978; Gray, 1988; Pepe, 1991)
• Special case of general counting process framework in ABGK, with technical issues handled via martingale results and alternative product integral representation of CIF
• Available in R function “cuminc” on Bob Gray’s website, biowww.dfci.harvard.edu/ gray
|
Cumulative Incidence vs. Kaplan Meier to estimate probability of failure
|
You should look at the work of Jason Fine on competing risk modeling. Inplace of Kaplan-Meier there is the cumulative incidence function also analogous to the Hazard function in survival analysis is t
|
Cumulative Incidence vs. Kaplan Meier to estimate probability of failure
You should look at the work of Jason Fine on competing risk modeling. In place of Kaplan-Meier there is the cumulative incidence function; analogous to the hazard function in survival analysis is the cause-specific hazard function. There is a competing risk model, the Fine-Gray model, that he uses. I heard him speak on this recently at a meeting in Connecticut at Yale. If you email me I can send you the lecture slides. Here is some information on references and a link to Bob Gray's website that I took off one of his slides.
INFERENCE FOR CIF: ONE SAMPLE, RIGHT CENSORING
• Naive approach to estimation of Fk using Kaplan-Meier (KM) is
invalid with dependent risks
• Even if risks independent, KM estimates distribution of Tk ,
where death from other causes is impossible
• Valid estimator (equivalent to MLE) obtained by substituting
KM estimator of S and NA estimatorof k in Fk (Aalen, 1978;
Gray, 1988; Pepe, 1991)
• Special case of general counting process framework in ABGK,
with technical issues handled via martingale results and alternative
product integral representation of CIF
• Available in R function “cuminc” on Bob Gray’s website,
biowww.dfci.harvard.edu/ gray
|
Cumulative Incidence vs. Kaplan Meier to estimate probability of failure
You should look at the work of Jason Fine on competing risk modeling. Inplace of Kaplan-Meier there is the cumulative incidence function also analogous to the Hazard function in survival analysis is t
|
47,473
|
Cumulative Incidence vs. Kaplan Meier to estimate probability of failure
|
Cumulative incidence is NOT opposite of survival in general. If one person can only experience one event ever, yes this is the case. However, if you are comparing risk of, say, herpes outbreak (where one individual may have several outbreaks over the duration of the study), the cumulative incidence curve will account for the total volume of outbreaks. The natural estimator of this curve, then, is from a Poisson model.
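As a hypothetical sketch (simulated data, made-up variable names), such recurrent-event rates can be modelled in R with a Poisson GLM and a log person-time offset:

```r
set.seed(2)
d <- data.frame(
  group  = rep(c("A", "B"), each = 50),
  time   = runif(100, 1, 5),        # person-time under observation
  events = rpois(100, lambda = 2)   # number of outbreaks per person
)
# Rate model: expected event count proportional to time at risk
fit <- glm(events ~ group + offset(log(time)), family = poisson, data = d)
exp(coef(fit))   # baseline rate and rate ratio for group B
```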
|
47,474
|
Inconsistency in mixed-effects model estimation results (Stata and SPSS)
|
Stata reports the estimated standard deviations of the random effects, whereas SPSS reports variances (this means you are not comparing apples with apples). If you square the results from Stata (or if you take the square root of the results from SPSS), you will see that they are exactly the same.
For example, squaring the results from Stata:
.1214197 ^ 2 = .014742744 (standlrt)
.3032317 ^ 2 = .091949464 (Intercept)
|
47,475
|
Why do we typically visually assess our assumptions?
|
I disagree with holding the opinion that the data is normally distributed unless you have statistically rejected normality. This is the procedure we follow when the goal of our research is actually to REJECT H0. It is not a procedure we should follow to test the assumptions of our statistical analysis.
What do we usually do to test for normality? There are tests but, like many people, I do not think they are usually useful. If the sample is small the power is too low, and if it is large the tests detect even small deviations from normality, which are almost always there and do not actually matter much. So, the usual thing to do is to look at the QQ-plot. Since you use "visually" I assume you are familiar with it.
Since it plots estimated quantiles against theoretical quantiles, you can expect that the estimated quantiles converge asymptotically to the population quantiles and that the QQ-plot is more or less stable as the sample size increases.
If your sample size is very small (10 qualifies) you either admit not being able to assume normality, or you justify it in some other way (experience with similar data, theoretical reasons, ...). In an ideal world it's at least an assumption that should be discussed.
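A minimal QQ-plot sketch in R, with simulated stand-in residuals (purely illustrative):

```r
set.seed(3)
res <- rnorm(30)   # stand-in for model residuals
qqnorm(res)        # sample quantiles vs. theoretical normal quantiles
qqline(res)        # points hugging this line suggest approximate normality
```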
|
47,476
|
Why do we typically visually assess our assumptions?
|
You certainly could say that you don't have enough data to test the assumptions. Generally speaking, in significance testing we hold that there is a default position which we will continue to believe unless there is sufficient evidence to the contrary. (Somewhat odd, I agree.) This 'default position' goes by the name of 'the null hypothesis'. Thus, for example with the normality assumption, we simply assume the data (actually the residuals) are normally distributed until the data force us to change our mind.
As to the question of why we do this visually instead of via a formal hypothesis test, there are several things. First, your visual system is just amazingly powerful: possibly the majority of your brain is devoted to visual processing (depending on how you count), ~70% of the sensory input is visual in nature, etc. It is considerably more powerful than the rational / reasoning parts (as counter-intuitive as that may sound). Personally, I feel like I understand something about my data when I see it that I just don't when I read in the statistical output that p<.05. I think a second reason is that there is a decent argument to be made that many statistical tests would ultimately show 'significance' if we had enough data, and thus you are only testing your $N$ (which you already knew). Moreover, if you have enough data to establish that they aren't normal, but your data are reasonably normal-ish, the central limit theorem will cover you anyway. So, what you really want to know is do you have a moderately-sized (or larger) deviation from normal with a mid-sized (or smaller) data set. Given that you know your $N$, a qq-plot, or similar approach, is more helpful. More along these lines can be found in this classic CV question.
|
47,477
|
Why do we typically visually assess our assumptions?
|
Why do we look at the sample - because it's all we've got. It would be great to look at the population to see if it meets our assumption but we can't.
We typically know what the residuals (or whatever) from our sample would look like if the population met our assumptions - so we look at them, just as part of the normal inference from sample to population.
Obviously it is silly to look at your sample of 2 and conclude "yes, that's plausibly from a normal distribution". So the approach becomes one of thinking carefully about power, what can be expected from your sample size, what you know about how the sample was generated, etc.
|
47,478
|
Ensembling regression models
|
If you are experiencing overfitting you could look into regularized regression, which in R can be fit using many packages such as glmnet. There are many good tutorials for this - one is "Regularization Paths for Generalized Linear Models via Coordinate Descent".
You might also look at randomForest or gbm in R depending on your data.
You can try fitting many models and averaging their prediction as well (your reference to ensembles).
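A minimal glmnet sketch on simulated data (the parameter choices here are just illustrative):

```r
library(glmnet)
set.seed(4)
x <- matrix(rnorm(100 * 20), 100, 20)
y <- x[, 1] - 2 * x[, 2] + rnorm(100)
cvfit <- cv.glmnet(x, y, alpha = 1)   # alpha = 1 is the lasso; alpha = 0 is ridge
coef(cvfit, s = "lambda.min")         # coefficients at the CV-chosen penalty
```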
|
47,479
|
Multiple FDR corrected experiments using the same data
|
The answer would depend on how you measure errors (and their proportions).
If you are concerned with the proportion of false discoveries within each experiment, then do separate FDR corrections. If you are worried about the "global" proportion of false discoveries, you could treat all the experiments as one. This would guarantee Global FDR control, but the FDR within each experiment is NOT controlled for.
I believe your suggestion is a conservative way to get FDR control at both levels: within each experiment and globally.
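Either choice is one call to p.adjust in R; the p-values below are made up:

```r
p <- c(0.001, 0.008, 0.02, 0.04, 0.20)   # e.g. all p-values from one experiment,
                                         # or pooled across experiments for "global" control
p.adjust(p, method = "BH")               # Benjamini-Hochberg FDR adjustment
```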
|
47,480
|
The unit information prior and its BIC approximation
|
Looking at BIC's formula
$$
BIC = -2 \log\left(\sup_\theta f(x\mid\theta)\right) + k \, \log n
$$
you will see that there is no trace of the prior $\pi(\theta)$ on it. That's because the derivation of the BIC by Schwarz is based on an asymptotic result under which his prior (a formal prior which puts mass on subspaces of the parameter space) is "washed out". So, to argue that the BIC is truly Bayesian amounts to saying that it is possible to do Bayesian inference without any priors (not even improper/default/reference/whatever priors). The question is whether, in certain cases, we may see the BIC as a (useful) approximation, in some sense, of a full Bayesian criterion. Schwarz's original paper is very short and worth reading.
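The formula can be checked directly in R — a small sketch comparing the hand computation with the built-in BIC() (note that k counts the regression coefficients plus the estimated error variance):

```r
fit <- lm(mpg ~ wt, data = mtcars)
n <- nobs(fit)
k <- length(coef(fit)) + 1   # coefficients plus sigma^2
manual  <- -2 * as.numeric(logLik(fit)) + k * log(n)
builtin <- BIC(fit)
all.equal(manual, builtin)   # the two computations agree
```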
|
47,481
|
Dependent vs. independent samples
|
Reading all the answers and comments, it's clear that we have a bit of a Catch-22 here. People can't answer the question without more context, but the question seems to be asking for that context.
So, I'm going to take a shot at this, trying to guess what Serenity Stack Holder means.
Two samples (or more than two) are dependent if they are somehow connected, not by having similar results necessarily, but by having one result in some way depend on the other result. For example, suppose I am interested in comparing the heights of men and women. If I randomly pick 50 women and 50 men from some population, the samples are independent, because what one person's height is has no bearing on another person's height. One thing gives you no information about the other. However, if I picked 50 heterosexual couples, the two samples would not be independent, because people tend to marry people of similar height.
I hope this helps!
|
47,482
|
Dependent vs. independent samples
|
@whuber is right that we need a little bit more context to decipher what you mean by "samples." If you mean "samples" in the sense of "the result of doing sampling," and thus you're using the term as a synonym of "realizations", then the following applies:
Samples are dependent conditional on some (or possibly no) prior knowledge if and only if knowing something about one sample could tell you something new about the other sample.
The most common case is where samples $x_1, x_2$ are assumed to be "independently and identically distributed" according to a distribution $D$. In that case, given that you know $D$ is, say, a normal distribution with mean zero and variance one, then after learning that the value of $x_1$ is $1.2$ your belief about $x_2$ is still that it follows $D$. Not knowing $D$, however, but knowing that the samples were iid from some normal distribution, leaves the samples clearly dependent: knowing something about one tells you something about $D$, which tells you something about the other.
Without assuming dependence or independence between samples, it's impossible to know whether they are dependent or independent, but one can often make good guesses by trying to find patterns. Correlation, the example above, is just one such pattern.
|
47,483
|
Dependent vs. independent samples
|
terminology: I'm a chemist. I have many samples which together form one sample in the statistical sense.
see also: How to define what a "sample" is?
Maybe a list with easy cases is a start:
if your samples are correlated, they are not independent (but you cannot conclude the other way round).
(kind of obvious): if one sample influences the other, they are not independent
if you know a cause that influences both samples, they are not independent
if you are able to refine a prediction of whatever you are interested in for one sample once you know the result for another sample (i.e. better than guessing from the overall distribution), then the samples are not independent.
it is always difficult to argue independence:
Imagine you study hair colour and to do that you grab people every 30 min from the street in front of your office. You may consider the persons independent: there is no way to predict another person's hair colour better than guessing the average hair colour. But how do you know (prove!) that you did not just miss the right model to predict hair colour?
(In)dependence may be discussed with regard to the scope of the study: now your colleague at the other side of the globe joins your study and sends you hair colours of people in front of his office. Now you can predict upcoming hair colours better than by general guessing, using the last hair colour you collected: hair colour is not independent of geographical region.
You may say that the population to be studied needs to be well defined in order to argue (in)dependence: is it "hair colours appearing in front of my office" or is it "hair colour of humans"?
|
47,484
|
How to input self-defined distance function in R?
|
hclust() takes a distance matrix, which you can construct yourself, doing the calculations in R or reading them in from elsewhere. as.dist() can be used to convert an arbitrary matrix into a 'dist' object, which is a convenient representation of a distance matrix that hclust() understands. Obviously whether your own distances make any sense is another question, but it's easy to try out.
If you want to apply an arbitrary function to all pairs of X and Y to get a matrix, have a look at outer()
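Putting the two together — a small sketch with an arbitrary, made-up dissimilarity:

```r
set.seed(5)
x  <- runif(10)
fn <- function(a, b) 1 - cos(a - b)   # any symmetric dissimilarity you like
D  <- as.dist(outer(x, x, fn))        # full matrix -> 'dist' object
hc <- hclust(D, method = "average")
plot(hc)                              # dendrogram of the custom-distance clustering
```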
|
47,485
|
How to input self-defined distance function in R?
|
Have a look at proxy, it creates distance matrices from any custom function.
set.seed(1)
mat <- matrix(runif(5))
fn <- function(x, y) 1 - cos(x - y)
proxy::dist(mat, method = fn)
1 2 3 4
2 0.005678023
3 0.046859766 0.020078605
4 0.199519068 0.140284488 0.055706274
5 0.002036234 0.014490103 0.068096902 0.239378143
|
47,486
|
How to input self-defined distance function in R?
|
My approach is to write the distance function for two vectors and use the apply family of functions to calculate distances between pairs of vectors (stored in a data frame, for example). Convert this symmetric matrix to a dist object using as.dist().
hclust() takes a dist object as an argument.
If you're plotting a heatmap, or something, supplying your custom function to the distfun argument overrides the default.
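A sketch of that workflow (toy data, arbitrary example distance):

```r
set.seed(6)
df <- data.frame(a = runif(5), b = runif(5))
m  <- as.matrix(df)
fn <- function(u, v) sum(abs(u - v))  # example distance between two rows
D  <- sapply(seq_len(nrow(m)), function(i)
        sapply(seq_len(nrow(m)), function(j) fn(m[i, ], m[j, ])))
hc <- hclust(as.dist(D))              # hclust() takes the dist object
```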
|
47,487
|
WiFi localization using machine learning
|
Could you simply take all APs that you saw in any reading to create a set $AP$ and fill in 0 strength for APs which do not appear in a particular scan? So a particular scan would be recorded as $S_i = \{s_1, s_2, s_3, \dots, s_n\}$ where each $s_i$ is the strength of $AP_i$.
That is, if there were $n$ unique APs seen in all scans, each $S_i$ would be $n$ entries (signal strengths) long. That way, you do not have variable-length features.
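A minimal sketch of the zero-fill idea (AP names and strengths are made up):

```python
def to_fixed_vectors(scans):
    # Union of all APs seen in any scan; an AP absent from a scan gets strength 0
    ap_ids = sorted({ap for scan in scans for ap in scan})
    return [[scan.get(ap, 0) for ap in ap_ids] for scan in scans]

scans = [{"ap1": -40, "ap2": -70}, {"ap2": -65, "ap3": -80}]
vecs = to_fixed_vectors(scans)
```

Every scan now maps to a vector of the same length, so any standard classifier can consume it.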
|
WiFi localization using machine learning
|
Could you simply take all APs that you saw in any reading to create a set $AP$ and fill in 0 strength for APs which do not appear in a particular scan? So a particular scan would be recorded as $S_i =
|
WiFi localization using machine learning
Could you simply take all APs that you saw in any reading to create a set $AP$ and fill in 0 strength for APs which do not appear in a particular scan? So a particular scan would be recorded as $S_i = \{s_1, s_2, s_3, \dots, s_n\}$ where each $s_i$ is the strength of $AP_i$.
That is, if there were $n$ unique APs seen in all scans, each $S_i$ would be $n$ entries (signal strengths) long. That way, you do not have variable-length features.
|
WiFi localization using machine learning
Could you simply take all APs that you saw in any reading to create a set $AP$ and fill in 0 strength for APs which do not appear in a particular scan? So a particular scan would be recorded as $S_i =
|
47,488
|
WiFi localization using machine learning
|
Here is a sketch of a naive Bayes solution.
Define $X_i$ as the $4$-tuple $(SSID_i, BSSID_i, SS_i, FREQ_i)$.
Denote the indicator of room $j$ by $R_j$, for $j=1,2,3$.
Use the frequencies in your sample to specify $P(X_i\mid R_j)$ as the number of times the $4$-tuple $X_i$ was observed in room $R_j$ divided by the total number of $4$-tuples observed in room $R_j$.
Give prior probability $P(R_j)=1/3$ to each room.
Given a new observed $4$-tuple $Y$, use Bayes Theorem to compute
$$
P(R_j\mid Y) = \frac{P(Y\mid R_j)P(R_j)}{\sum_{k=1}^3 P(Y\mid R_k)P(R_k)} \, .
$$
Make a decision: classify $Y$ as coming from the room with highest posterior probability.
Use this as a starting point and sophisticate it. An immediate possibility is to introduce some kind of smoothing: http://en.wikipedia.org/wiki/Additive_smoothing
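A bare-bones sketch of the counting and Bayes steps (room names and tuple values are hypothetical, uniform prior, no smoothing yet):

```python
from collections import Counter, defaultdict

def train(samples):
    # samples: list of (room, four_tuple); returns P(tuple | room) by counting
    counts = defaultdict(Counter)
    for room, t in samples:
        counts[room][t] += 1
    return {r: {t: c / sum(cnt.values()) for t, c in cnt.items()}
            for r, cnt in counts.items()}

def posterior(tables, y):
    # Bayes theorem with uniform prior P(R_j) = 1/len(rooms)
    rooms = list(tables)
    joint = {r: tables[r].get(y, 0.0) / len(rooms) for r in rooms}
    z = sum(joint.values())
    # without smoothing a never-seen tuple gives z == 0; fall back to the prior
    return {r: (p / z if z else 1 / len(rooms)) for r, p in joint.items()}

samples = [("R1", ("net", "aa", -40, 2412))] * 3 + [("R2", ("net", "aa", -70, 2412))]
tables = train(samples)
post = posterior(tables, ("net", "aa", -40, 2412))
```

The fallback branch is exactly where additive smoothing would slot in, so that unseen tuples get small nonzero likelihoods instead of collapsing to the prior.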
|
WiFi localization using machine learning
|
Here is a sketch of a naive Bayes solution.
Define $X_i$ as the $4$-tuple $(SSID_i, BSSID_i, SS_i, FREQ_i)$.
Denote the indicator of room $j$ by $R_j$, for $j=1,2,3$.
Use the frequencies in your sa
|
WiFi localization using machine learning
Here is a sketch of a naive Bayes solution.
Define $X_i$ as the $4$-tuple $(SSID_i, BSSID_i, SS_i, FREQ_i)$.
Denote the indicator of room $j$ by $R_j$, for $j=1,2,3$.
Use the frequencies in your sample to specify $P(X_i\mid R_j)$ as the number of times the $4$-tuple $X_i$ was observed in room $R_j$ divided by the total number of $4$-tuples observed in room $R_j$.
Give prior probability $P(R_j)=1/3$ to each room.
Given a new observed $4$-tuple $Y$, use Bayes Theorem to compute
$$
P(R_j\mid Y) = \frac{P(Y\mid R_j)P(R_j)}{\sum_{k=1}^3 P(Y\mid R_k)P(R_k)} \, .
$$
Make a decision: classify $Y$ as coming from the room with highest posterior probability.
Use this as a starting point and sophisticate it. An immediate possibility is to introduce some kind of smoothing: http://en.wikipedia.org/wiki/Additive_smoothing
|
WiFi localization using machine learning
Here is a sketch of a naive Bayes solution.
Define $X_i$ as the $4$-tuple $(SSID_i, BSSID_i, SS_i, FREQ_i)$.
Denote the indicator of room $j$ by $R_j$, for $j=1,2,3$.
Use the frequencies in your sa
|
47,489
|
Dimensionality reduction method for uncorrelated data?
|
A simple discriminative classifier should train in seconds and generalize well after tuning l1 and l2 regularization parameters on a dataset of this size. There is no need to do dimensionality reduction.
If you still needed to do dimensionality reduction for whatever reason, you could use random projections, independent components analysis, autoencoders, or any number of nonlinear techniques which work well in the absence of obvious linear relationships on the data.
|
Dimensionality reduction method for uncorrelated data?
|
A simple discriminative classifier should train in seconds and generalize well after tuning l1 and l2 regularization parameters on a dataset of this size. There is no need to do dimensionality reducti
|
Dimensionality reduction method for uncorrelated data?
A simple discriminative classifier should train in seconds and generalize well after tuning l1 and l2 regularization parameters on a dataset of this size. There is no need to do dimensionality reduction.
If you still needed to do dimensionality reduction for whatever reason, you could use random projections, independent components analysis, autoencoders, or any number of nonlinear techniques which work well in the absence of obvious linear relationships on the data.
|
Dimensionality reduction method for uncorrelated data?
A simple discriminative classifier should train in seconds and generalize well after tuning l1 and l2 regularization parameters on a dataset of this size. There is no need to do dimensionality reducti
|
47,490
|
Dimensionality reduction method for uncorrelated data?
|
This description is closer to OK, but still you need to describe a lot of things in more detail.
Since you want to classify, it seems like what you want is LDA (Linear Discriminant Analysis) more than PCA. You want to do "dimensionality reduction", possibly because you need to be able to describe the rule you obtain, but more importantly don't forget that you want something that helps you classify boys from girls.
The most important key step is that you will have to think of a sensible representation for your data that helps achieve this goal. Depending on this:
what do the 500 RTs mean? Are they repetitions of the same experiment? or
are they executions of different experiments? Why does it "stand on its own" and "order has no meaning"?
the representation would be significantly different.
Also when you say PCA does not work well, what exactly do you mean by that? It could be many things: does it give unacceptable accuracy on new data? or does it work reasonably well, just not as well as you had hoped?
What you say in your question (A) is not true; you may be seeing the effects of a poor representation of your data.
|
Dimensionality reduction method for uncorrelated data?
|
This description is closer to OK, but still you need to describe a lot of things in more detail.
Since you want to classify, it seems like what you want is LDA (Linear Discriminant Analysis) more tha
|
Dimensionality reduction method for uncorrelated data?
This description is closer to OK, but still you need to describe a lot of things in more detail.
Since you want to classify, it seems like what you want is LDA (Linear Discriminant Analysis) more than PCA. You want to do "dimensionality reduction", possibly because you need to be able to describe the rule you obtain, but more importantly don't forget that you want something that helps you classify boys from girls.
The most important key step is that you will have to think of a sensible representation for your data that helps achieve this goal. Depending on this:
what do the 500 RTs mean? Are they repetitions of the same experiment? or
are they executions of different experiments? Why does it "stand on its own" and "order has no meaning"?
the representation would be significantly different.
Also when you say PCA does not work well, what exactly do you mean by that? It could be many things: does it give unacceptable accuracy on new data? or does it work reasonably well, just not as well as you had hoped?
What you say in your question (A) is not true; you may be seeing the effects of a poor representation of your data.
|
Dimensionality reduction method for uncorrelated data?
This description is closer to OK, but still you need to describe a lot of things in more detail.
Since you want to classify, it seems like what you want is LDA (Linear Discriminant Analysis) more tha
|
47,491
|
Classification of observation symbols in a HMM?
|
This is a classic Black Swan problem. HMM1 will assign zero likelihood to symbols D, E, F and HMM2 will assign zero likelihood to symbols A, B, C. Essentially, from HMM1's perspective D, E, F are impossible, while from HMM2's perspective A, B, C are. They will never predict them. (Note that there is nothing specific to HMMs in this answer -- you could replace "HMM" with "classifier" or "model" and the previous statement would still hold.)
If you knew something about the relationship between the symbols A, B, C and D, E, F you could get creative with mapping them between each other.
In short, the log-likelihood of such a sequence, i.e. a sequence A, B, C scored by a model trained on D, E, F, is always -inf (= log 0).
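A minimal illustration of why the log-likelihood collapses to -inf (emission probabilities only, transitions ignored for brevity):

```python
import math

def sequence_loglik(seq, emis):
    # Log-likelihood of a symbol sequence under per-symbol emission
    # probabilities; any symbol the model has never seen contributes -inf
    total = 0.0
    for s in seq:
        p = emis.get(s, 0.0)
        total += math.log(p) if p > 0 else float("-inf")
    return total

# model trained only on A, B, C; the observed sequence contains an unseen D
ll = sequence_loglik(["A", "B", "D"], {"A": 0.5, "B": 0.3, "C": 0.2})
```

One unseen symbol anywhere in the sequence drags the whole score to -inf, regardless of how well the rest fits.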
|
Classification of observation symbols in a HMM?
|
This is a classic Black Swan problem. HMM1 will assign zero likelihood to symbols D, E, F and HMM2 will assign zero likelihood to symbols A, B, C. Essentially from HMM1's perspective, D, E, F are im
|
Classification of observation symbols in a HMM?
This is a classic Black Swan problem. HMM1 will assign zero likelihood to symbols D, E, F and HMM2 will assign zero likelihood to symbols A, B, C. Essentially, from HMM1's perspective D, E, F are impossible, while from HMM2's perspective A, B, C are. They will never predict them. (Note that there is nothing specific to HMMs in this answer -- you could replace "HMM" with "classifier" or "model" and the previous statement would still hold.)
If you knew something about the relationship between the symbols A, B, C and D, E, F you could get creative with mapping them between each other.
In short, the log-likelihood of such a sequence, i.e. a sequence A, B, C scored by a model trained on D, E, F, is always -inf (= log 0).
|
Classification of observation symbols in a HMM?
This is a classic Black Swan problem. HMM1 will assign zero likelihood to symbols D, E, F and HMM2 will assign zero likelihood to symbols A, B, C. Essentially from HMM1's perspective, D, E, F are im
|
47,492
|
Classification of observation symbols in a HMM?
|
Depending on how you define an observation, you can solve this problem by having a pseudo observation for rare training observations or unseen observations, e.g. number for all numbers. That way, when the HMM encounters an unseen observation, it looks for the closest pseudo observation. See 2.7.1 in this for more details.
On the other hand, if you cannot have pseudo observations in your HMM model, the simplest way to handle unseen observations is just to assign them zero probability!
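A tiny sketch of the additive-smoothing alternative for emission probabilities (symbols made up), so that unseen-but-possible symbols get a small nonzero probability instead of zero:

```python
from collections import Counter

def smoothed_emissions(observed, vocab, alpha=1.0):
    # Additive (Laplace) smoothing: every symbol in vocab gets alpha
    # phantom counts, so symbols never seen in training still have p > 0
    counts = Counter(observed)
    total = len(observed) + alpha * len(vocab)
    return {s: (counts[s] + alpha) / total for s in vocab}

probs = smoothed_emissions(["A", "A", "B"], vocab=["A", "B", "C"])
```

Note this only helps for symbols you can enumerate in advance; truly out-of-vocabulary symbols still need a pseudo-observation bucket.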
|
Classification of observation symbols in a HMM?
|
Depending on how you define an observation, you can solve this problem by having a pseudo observation for rare training observations or unseen observations, e.g. number for all numbers. That way, when t
|
Classification of observation symbols in a HMM?
Depending on how you define an observation, you can solve this problem by having a pseudo observation for rare training observations or unseen observations, e.g. number for all numbers. That way, when the HMM encounters an unseen observation, it looks for the closest pseudo observation. See 2.7.1 in this for more details.
On the other hand, if you cannot have pseudo observations in your HMM model, the simplest way to handle unseen observations is just to assign them zero probability!
|
Classification of observation symbols in a HMM?
Depending on how you define an observation, you can solve this problem by having a pseudo observation for rare training observations or unseen observations, e.g. number for all numbers. That way, when t
|
47,493
|
Dimensionality reduction using self-organizing map
|
In your extreme example, I'd say your view is correct. You specified that you wanted a reduction to one dimension with two possible values in that dimension and that's what you got. As Wikipedia says, SOM creates a discretized low-dimensional representation.
Perhaps the issue is how SOM does this. Let's say you specified a 3x3 SOM, which is a 2-D grid with 9 points. The SOM algorithm embeds this 2-D grid in your 1000-D space, as Neil G points out. It then adapts the 9 points to the data in such a way as to maintain the manifold's smoothness in 1000-D space, while moving the grid points to denser (in terms of your data) areas. (It does this by propagating changes to closest SOM points in the 2-D manifold, not to neighbors in the 1000-D space.)
So, each of the 9 points in your SOM grid has a 1000-D location in 1000-D space, but after the algorithm is finished, you are considering the 9 points in the 3x3 grid itself, reducing 1000-D space to (discretized) 2-D space.
You could also look at this as a kind of clustering of your data around 9 points that are constrained in relationship to each other.
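A toy sketch of the adaptation loop described above (1-D grid for brevity; the data, constants, and function name are all made up):

```python
import random

def train_som(data, n_units=3, epochs=50, lr=0.5, radius=1.0):
    # Minimal 1-D SOM: units live in the input space, but updates propagate
    # to grid neighbours of the best-matching unit, not input-space neighbours
    dim = len(data[0])
    units = [list(random.choice(data)) for _ in range(n_units)]
    for _ in range(epochs):
        for x in data:
            # best-matching unit by squared Euclidean distance in input space
            bmu = min(range(n_units),
                      key=lambda k: sum((u - v) ** 2 for u, v in zip(units[k], x)))
            for k in range(n_units):
                if abs(k - bmu) <= radius:   # neighbourhood on the 1-D grid
                    for d in range(dim):
                        units[k][d] += lr * (x[d] - units[k][d])
        lr *= 0.9                            # decay the learning rate
    return units

random.seed(0)
data = [[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]]
units = train_som(data, n_units=2, radius=0.0)
```

With radius 0 this degenerates into online k-means, which is exactly the "clustering around constrained points" view: the two units settle near the two data clusters.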
|
Dimensionality reduction using self-organizing map
|
In your extreme example, I'd say your view is correct. You specified that you wanted a reduction to one dimension with two possible values in that dimension and that's what you got. As Wikipedia says,
|
Dimensionality reduction using self-organizing map
In your extreme example, I'd say your view is correct. You specified that you wanted a reduction to one dimension with two possible values in that dimension and that's what you got. As Wikipedia says, SOM creates a discretized low-dimensional representation.
Perhaps the issue is how SOM does this. Let's say you specified a 3x3 SOM, which is a 2-D grid with 9 points. The SOM algorithm embeds this 2-D grid in your 1000-D space, as Neil G points out. It then adapts the 9 points to the data in such a way as to maintain the manifold's smoothness in 1000-D space, while moving the grid points to denser (in terms of your data) areas. (It does this by propagating changes to closest SOM points in the 2-D manifold, not to neighbors in the 1000-D space.)
So, each of the 9 points in your SOM grid has a 1000-D location in 1000-D space, but after the algorithm is finished, you are considering the 9 points in the 3x3 grid itself, reducing 1000-D space to (discretized) 2-D space.
You could also look at this as a kind of clustering of your data around 9 points that are constrained in relationship to each other.
|
Dimensionality reduction using self-organizing map
In your extreme example, I'd say your view is correct. You specified that you wanted a reduction to one dimension with two possible values in that dimension and that's what you got. As Wikipedia says,
|
47,494
|
Dimensionality reduction using self-organizing map
|
I would say that SOM reduces the dimension for visual and other analysis purposes; the mapping between the reduced space and the original space is lost. This is due to the fact that your grid of 3x3, or 9, 2-D points is defined a priori and kept unchanged during training. What is mapped directly to the reduced space is the topological arrangement of the original space. In other words, if you pick two neighbors in the reduced space, they will be neighbors (with greater or lower distance - see the U-matrix) in the original space.
|
Dimensionality reduction using self-organizing map
|
I would say that SOM reduces the dimension for visual and other analysis purposes; the mapping between the reduced space and the original space is lost. This is due to the fact that your grid of 3x3 or 9
|
Dimensionality reduction using self-organizing map
I would say that SOM reduces the dimension for visual and other analysis purposes; the mapping between the reduced space and the original space is lost. This is due to the fact that your grid of 3x3, or 9, 2-D points is defined a priori and kept unchanged during training. What is mapped directly to the reduced space is the topological arrangement of the original space. In other words, if you pick two neighbors in the reduced space, they will be neighbors (with greater or lower distance - see the U-matrix) in the original space.
|
Dimensionality reduction using self-organizing map
I would say that SOM reduces the dimension for visual and other analysis purposes; the mapping between the reduced space and the original space is lost. This is due to the fact that your grid of 3x3 or 9
|
47,495
|
Dimensionality reduction using self-organizing map
|
A 1 by 2 SOM is not a 1-dimensional SOM, but 2-dimensional.
Your view that "... we can say we use a 1-dimensional output space to
represent the original 1000-dimensional space." is therefore not
right.
If you want a 1-dimensional SOM, set it at 1 by 1. Your original data
of 200 by 1000 will then be reduced to 1 by 1000. That is, from a
200-dimensional dataset to a 1-dimensional dataset
|
Dimensionality reduction using self-organizing map
|
A 1 by 2 SOM is not a 1-dimensional SOM, but 2-dimensional.
Your view that "... we can say we use a 1-dimensional output space to
represent the original 1000-dimensional space." is therefore not
rig
|
Dimensionality reduction using self-organizing map
A 1 by 2 SOM is not a 1-dimensional SOM, but 2-dimensional.
Your view that "... we can say we use a 1-dimensional output space to
represent the original 1000-dimensional space." is therefore not
right.
If you want a 1-dimensional SOM, set it at 1 by 1. Your original data
of 200 by 1000 will then be reduced to 1 by 1000. That is, from a
200-dimensional dataset to a 1-dimensional dataset
|
Dimensionality reduction using self-organizing map
A 1 by 2 SOM is not a 1-dimensional SOM, but 2-dimensional.
Your view that "... we can say we use a 1-dimensional output space to
represent the original 1000-dimensional space." is therefore not
rig
|
47,496
|
Dimensionality reduction using self-organizing map
|
No, the feature vectors $x_1$ and $x_2$ are in the 1000-dimensional space. If you train with the same points for long enough, each feature vector approaches the Euclidean mean of its corresponding data points.
|
Dimensionality reduction using self-organizing map
|
No, the feature vectors $x_1$ and $x_2$ are in the 1000-dimensional space. If you train with the same points for long enough, each feature vector approaches the Euclidean mean of its corresponding da
|
Dimensionality reduction using self-organizing map
No, the feature vectors $x_1$ and $x_2$ are in the 1000-dimensional space. If you train with the same points for long enough, each feature vector approaches the Euclidean mean of its corresponding data points.
|
Dimensionality reduction using self-organizing map
No, the feature vectors $x_1$ and $x_2$ are in the 1000-dimensional space. If you train with the same points for long enough, each feature vector approaches the Euclidean mean of its corresponding da
|
47,497
|
Reference for random forests
|
Random forest is a machine learning algorithm proposed by Breiman in this paper (there is also a webpage about it). Its significant property is that it can calculate an importance measure for attributes, showing more or less how useful they were to the model -- it is usually better than correlation with the decision or linear-model coefficient significance, since it can handle some nonlinearity and multi-attribute interactions without blowing the roof off with overfitting or combinatorial explosion, but it is obviously far from perfectly recreating the underlying Bayes net.
Now, this measure works quite well as a ranking of features, but it is not a complete answer to either of the feature selection problems -- one needs some cutoff to select the minimal optimal set (i.e. the set of attributes on which the model works best) and the all-relevant set (i.e. the set of attributes which are objectively connected to the decision).
The minimal-optimal problem is usually quite easy and can be solved with recursive feature elimination or (even better) with some regularization-supporting algorithm.
On the other hand, the all-relevant problem is very pesky and usually requires some explicit or implicit contrast attributes to obtain the importance threshold, plus some stabilization and "robustization" of the importance measure -- Boruta is one of the RF wrapper algorithms trying to do this by extending the data set with artificial random attributes and iterating RF training while progressively purging attributes claimed unimportant.
Note: there are of course non-RF based methods to deal with both feature selection problems, either using other importance sources, adding feature selection to internal optimization of the model or simply performing some more or less complex correlation tests between attributes and decision. For some more ramblings about this topic, you can skim this preprint.
For this two-problems-Bayes-net vision of feature selection, see this paper.
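Boruta's contrast-attribute trick can be sketched as follows (a shuffled "shadow" copy of each column; this is a hypothetical helper for illustration, not Boruta's actual code):

```python
import random

def add_shadow_features(X, seed=0):
    # Append a shuffled (shadow) copy of each column: shuffling breaks any
    # relation to the target while preserving the marginal distribution, so
    # shadow importances give a baseline threshold for "real" importance
    rng = random.Random(seed)
    cols = list(zip(*X))
    shadows = []
    for c in cols:
        s = list(c)
        rng.shuffle(s)
        shadows.append(s)
    return [list(row) + [s[i] for s in shadows] for i, row in enumerate(X)]

extended = add_shadow_features([[1, 2], [3, 4], [5, 6]])
```

A random forest is then trained on the extended data, and real attributes whose importance does not beat the shadows' are candidates for removal.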
|
Reference for random forests
|
Random forest is a machine learning algorithm proposed by Breiman in this paper (there is also a webpage about it). Its significant property is that it can calculate an importance measure for attribut
|
Reference for random forests
Random forest is a machine learning algorithm proposed by Breiman in this paper (there is also a webpage about it). Its significant property is that it can calculate an importance measure for attributes, showing more or less how useful they were to the model -- it is usually better than correlation with the decision or linear-model coefficient significance, since it can handle some nonlinearity and multi-attribute interactions without blowing the roof off with overfitting or combinatorial explosion, but it is obviously far from perfectly recreating the underlying Bayes net.
Now, this measure works quite well as a ranking of features, but it is not a complete answer to either of the feature selection problems -- one needs some cutoff to select the minimal optimal set (i.e. the set of attributes on which the model works best) and the all-relevant set (i.e. the set of attributes which are objectively connected to the decision).
The minimal-optimal problem is usually quite easy and can be solved with recursive feature elimination or (even better) with some regularization-supporting algorithm.
On the other hand, the all-relevant problem is very pesky and usually requires some explicit or implicit contrast attributes to obtain the importance threshold, plus some stabilization and "robustization" of the importance measure -- Boruta is one of the RF wrapper algorithms trying to do this by extending the data set with artificial random attributes and iterating RF training while progressively purging attributes claimed unimportant.
Note: there are of course non-RF based methods to deal with both feature selection problems, either using other importance sources, adding feature selection to internal optimization of the model or simply performing some more or less complex correlation tests between attributes and decision. For some more ramblings about this topic, you can skim this preprint.
For this two-problems-Bayes-net vision of feature selection, see this paper.
|
Reference for random forests
Random forest is a machine learning algorithm proposed by Breiman in this paper (there is also a webpage about it). Its significant property is that it can calculate an importance measure for attribut
|
47,498
|
Train a SVM-based classifier while taking into account the weight information
|
What you are asking doesn't really fall into the framework of the SVM. There is some work on incorporating prior knowledge into SVMs (see e.g. here), but these approaches are generally not on an example-by-example basis.
I can think of one way in which you could approach this, if you have a lot of samples. You could use the weights as probabilities for inclusion in random subsets. You would then learn an SVM on each subset, and your final classifier is then a linear combination of these per-subset SVMs. This is a variation on bootstrapping, which normally works over subsets of the features (see e.g. here), and might be quite interesting to analyse.
[Edit 1]:
Based on the answers from Jeff and Dikran it occurred to me that you can just incorporate this into the SVM objective. Normally the primal form looks like:
$\min_{\mathbf{w},\mathbf{\xi}, b } \left\{\frac{1}{2} \|\mathbf{w}\|^2 + C \sum_{i=1}^n \xi_i \right\}$
subject to (for any $i=1,\dots n$)
$y_i(\mathbf{w}\cdot\mathbf{x_i} - b) \ge 1 - \xi_i, ~~~~\xi_i \ge 0 .$
but you could just include another vector of confidence values, e.g. $0 < \delta_i \leq 1, ~~~~i=1,\dots n$:
$\min_{\mathbf{w},\mathbf{\xi}, b } \left\{\frac{1}{2} \|\mathbf{w}\|^2 + C \sum_{i=1}^n \frac{\xi_i}{\delta_i} \right\}$
subject to (for any $i=1,\dots n$)
$y_i(\mathbf{w}\cdot\mathbf{x_i} - b) \ge 1 - \xi_i, ~~~~\xi_i \ge 0 .$
which would mean that instances with low probability would receive a greater penalty in the objective. Note that now the $C$ parameter performs two roles - as a regulariser and as a scaling factor for the confidence scores. This may cause its own problems, so it might be better to split it into two parts, but then of course you would have an extra hyperparameter to tune.
[Edit 2]:
This can be done with libSVM (MATLAB and Python interfaces are included). There is also code available in several languages for the SMO algorithm which can solve the SVM problem efficiently. Alternatively you could use an optimisation package, such as quadprog in matlab or CVX, to write a custom solver.
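To make the modified objective concrete, here is a sketch that merely evaluates it for a fixed candidate $(\mathbf{w}, b)$ -- it is not a solver, and the toy numbers are made up:

```python
def weighted_primal_objective(w, b, X, y, delta, C=1.0):
    # Value of the confidence-weighted soft-margin objective
    # 1/2 ||w||^2 + C * sum_i xi_i / delta_i,
    # where xi_i = max(0, 1 - y_i (w.x_i - b)) is the hinge slack
    margin = 0.5 * sum(wj * wj for wj in w)
    penalty = 0.0
    for x_vec, yi, di in zip(X, y, delta):
        score = sum(wj * xj for wj, xj in zip(w, x_vec)) - b
        slack = max(0.0, 1.0 - yi * score)
        penalty += slack / di
    return margin + C * penalty

X = [[2.0], [-2.0], [0.5]]
y = [1, -1, 1]
delta = [1.0, 1.0, 0.5]   # low confidence value for the third example
obj = weighted_primal_objective([1.0], 0.0, X, y, delta)
```

A solver would minimise this over $(\mathbf{w}, b)$; the sketch just shows how the $\delta_i$ rescale each example's slack penalty.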
|
Train a SVM-based classifier while taking into account the weight information
|
What you are asking doesn't really fall into the framework of the SVM. There is some work on incorporating prior knowledge into SVMs (see e.g. here), but these approaches are generally not on an example
|
Train a SVM-based classifier while taking into account the weight information
What you are asking doesn't really fall into the framework of the SVM. There is some work on incorporating prior knowledge into SVMs (see e.g. here), but these approaches are generally not on an example-by-example basis.
I can think of one way in which you could approach this, if you have a lot of samples. You could use the weights as probabilities for inclusion in random subsets. You would then learn an SVM on each subset, and your final classifier is then a linear combination of these per-subset SVMs. This is a variation on bootstrapping, which normally works over subsets of the features (see e.g. here), and might be quite interesting to analyse.
[Edit 1]:
Based on the answers from Jeff and Dikran it occurred to me that you can just incorporate this into the SVM objective. Normally the primal form looks like:
$\min_{\mathbf{w},\mathbf{\xi}, b } \left\{\frac{1}{2} \|\mathbf{w}\|^2 + C \sum_{i=1}^n \xi_i \right\}$
subject to (for any $i=1,\dots n$)
$y_i(\mathbf{w}\cdot\mathbf{x_i} - b) \ge 1 - \xi_i, ~~~~\xi_i \ge 0 .$
but you could just include another vector of confidence values, e.g. $0 < \delta_i \leq 1, ~~~~i=1,\dots n$:
$\min_{\mathbf{w},\mathbf{\xi}, b } \left\{\frac{1}{2} \|\mathbf{w}\|^2 + C \sum_{i=1}^n \frac{\xi_i}{\delta_i} \right\}$
subject to (for any $i=1,\dots n$)
$y_i(\mathbf{w}\cdot\mathbf{x_i} - b) \ge 1 - \xi_i, ~~~~\xi_i \ge 0 .$
which would mean that instances with low probability would receive a greater penalty in the objective. Note that now the $C$ parameter performs two roles - as a regulariser and as a scaling factor for the confidence scores. This may cause its own problems, so it might be better to split it into two parts, but then of course you would have an extra hyperparameter to tune.
[Edit 2]:
This can be done with libSVM (MATLAB and Python interfaces are included). There is also code available in several languages for the SMO algorithm which can solve the SVM problem efficiently. Alternatively you could use an optimisation package, such as quadprog in matlab or CVX, to write a custom solver.
|
Train a SVM-based classifier while taking into account the weight information
What you are asking doesn't really fall into the framework of the SVM. There is some work on incorporating prior knowledge into SVMs (see e.g. here), but these approaches are generally not on an example
|
47,499
|
Train a SVM-based classifier while taking into account the weight information
|
A paper that might be of interest is "Estimating a Kernel Fisher Discriminant in the Presence of Label Noise" by Lawrence and Scholkopf, which deals with KFD rather than SVM, but the two classifiers are closely related and will give similar results for most problems. Note that the KFD is equivalent to kernel ridge regression, and a 2-norm SVM is equivalent to KRR computed on only the support vectors, so it may be possible to port this approach to SVM.
In general dealing with label noise is easiest when you have a fully probabilistic classifier, so as tdc suggests, the SVM is probably not the best approach. It might be worth looking at semi-supervised versions of the SVM, as the problem is similar in that in both cases you only have uncertain information about some of the labels (very uncertain for semi-supervised methods). The approach tdc suggests is probably worth trying, or alternatively present each pattern twice, with different labels each time, with weights $p$ and $1-p$.
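The duplicate-pattern idea can be sketched as follows (hypothetical helper; the resulting triples would be fed to any learner that accepts per-example weights):

```python
def split_by_label_confidence(X, p):
    # Present each pattern twice, once with each label, weighted by the
    # probability p_i that the positive label is correct
    rows = []
    for x, pi in zip(X, p):
        rows.append((x, +1, pi))
        rows.append((x, -1, 1.0 - pi))
    return rows

rows = split_by_label_confidence([[0.2], [1.5]], [0.9, 0.6])
```

Each original pattern contributes total weight 1, split between the two possible labels according to the label confidence.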
|
Train a SVM-based classifier while taking into account the weight information
|
A paper that might be of interest is "Estimating a Kernel Fisher Discriminant in the Presence of Label Noise" by Lawrence and Scholkopf, which deals with KFD rather than SVM, but the two classifiers a
|
Train a SVM-based classifier while taking into account the weight information
A paper that might be of interest is "Estimating a Kernel Fisher Discriminant in the Presence of Label Noise" by Lawrence and Scholkopf, which deals with KFD rather than SVM, but the two classifiers are closely related and will give similar results for most problems. Note that the KFD is equivalent to kernel ridge regression, and a 2-norm SVM is equivalent to KRR computed on only the support vectors, so it may be possible to port this approach to SVM.
In general dealing with label noise is easiest when you have a fully probabilistic classifier, so as tdc suggests, the SVM is probably not the best approach. It might be worth looking at semi-supervised versions of the SVM, as the problem is similar in that in both cases you only have uncertain information about some of the labels (very uncertain for semi-supervised methods). The approach tdc suggests is probably worth trying, or alternatively present each pattern twice, with different labels each time, with weights $p$ and $1-p$.
|
Train a SVM-based classifier while taking into account the weight information
A paper that might be of interest is "Estimating a Kernel Fisher Discriminant in the Presence of Label Noise" by Lawrence and Scholkopf, which deals with KFD rather than SVM, but the two classifiers a
|
47,500
|
Train a SVM-based classifier while taking into account the weight information
|
There is a technique called weighted SVM (see ref below), that appears to be supported by LibSVM (which I've never actually used). Weighted SVM solves the problem of having two classes with unequal training data. In this case, classification is biased towards the class with more observations. To compensate, W-SVM sets the penalty parameter C in proportion to the size of the class.
The same idea can be applied to confidence information by giving each observation its own C; though I'm not sure if LibSVM supports this. In this sense, you give a larger penalty to observations in which you have a lot of confidence, and a small penalty to observations with which you have little confidence. The end result is that the hyperplane is determined by weighting each observation by its confidence interval, as you desire.
Huang, & Du (2005). Weighted support vector machine for classification with uneven training class sizes. Proc. of the 4th Int. Conf. on Machine Learning and Cybernetics, 4365-4369. Retrieved from http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=1527706
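As a sketch of per-class penalties (this uses the common inverse-frequency "balanced" heuristic purely for illustration, not necessarily the paper's exact scaling):

```python
from collections import Counter

def class_penalties(labels, C=1.0):
    # Scale C inversely to each class's frequency, so the minority class
    # is penalised more heavily per margin violation
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: C * n / (k * m) for c, m in counts.items()}

w = class_penalties([1, 1, 1, -1])
```

The per-observation version simply replaces the class count with an individual confidence, giving each example its own effective C.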
|
Train a SVM-based classifier while taking into account the weight information
|
There is a technique called weighted SVM (see ref below), that appears to be supported by LibSVM (which I've never actually used). Weighted SVM solves the problem of having two classes with unequal tr
|
Train a SVM-based classifier while taking into account the weight information
There is a technique called weighted SVM (see ref below), that appears to be supported by LibSVM (which I've never actually used). Weighted SVM solves the problem of having two classes with unequal training data. In this case, classification is biased towards the class with more observations. To compensate, W-SVM sets the penalty parameter C in proportion to the size of the class.
The same idea can be applied to confidence information by giving each observation its own C; though I'm not sure if LibSVM supports this. In this sense, you give a larger penalty to observations in which you have a lot of confidence, and a small penalty to observations with which you have little confidence. The end result is that the hyperplane is determined by weighting each observation by its confidence interval, as you desire.
Huang, & Du (2005). Weighted support vector machine for classification with uneven training class sizes. Proc. of the 4th Int. Conf. on Machine Learning and Cybernetics, 4365-4369. Retrieved from http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=1527706
|
Train a SVM-based classifier while taking into account the weight information
There is a technique called weighted SVM (see ref below), that appears to be supported by LibSVM (which I've never actually used). Weighted SVM solves the problem of having two classes with unequal tr
|