Dataset columns:
idx: int64 (1 to 56k)
question: string, lengths 15 to 155
answer: string, lengths 2 to 29.2k
question_cut: string, lengths 15 to 100
answer_cut: string, lengths 2 to 200
conversation: string, lengths 47 to 29.3k
conversation_cut: string, lengths 47 to 301
47,401
Spurious correlation
The regression would not be spurious. If $Y_t=\delta_0+\delta_1 t+u_t$ and $X_t=\gamma_0+\gamma_1t+v_t$ then $$t=\frac{1}{\gamma_1}X_t-\frac{\gamma_0}{\gamma_1}-\frac{1}{\gamma_1}v_t$$ and $$Y_t=\delta_0-\frac{\delta_1\gamma_0}{\gamma_1}+\frac{\delta_1}{\gamma_1}X_t+u_t-\frac{\delta_1}{\gamma_1}v_t$$ Now this is simply...
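A quick R simulation of the algebra above (the parameter values and error scales are my own, purely for illustration): the fitted slope from regressing $Y_t$ on $X_t$ recovers $\delta_1/\gamma_1$ rather than a spurious value.

    set.seed(1)
    n  <- 500
    t  <- 1:n
    d0 <- 1; d1 <- 0.5    # delta_0, delta_1 (illustrative)
    g0 <- 2; g1 <- 0.3    # gamma_0, gamma_1 (illustrative)
    y <- d0 + d1 * t + rnorm(n)   # u_t
    x <- g0 + g1 * t + rnorm(n)   # v_t
    coef(lm(y ~ x))["x"]          # close to d1 / g1
    d1 / g1                       # = 1.667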
47,402
Longitudinal predictive models
This is phase 1 of my answer. I want first to make sure that I understand the model. Take just one customer, say customer $i$, and denote by $s_{it}$ its monthly savings balance. From what the OP writes, the model appears to be (for one customer and assuming linearity for the moment) $$ s_{it} = g(a_1t,a_2t^2, a_3t^3,....
47,403
Longitudinal predictive models
Your comment "Hence, this is a forecasting type problem, but not in the time series sense where you observe a sequence of data significantly longer than the time horizon you are trying to predict." causes me some concern. If I observe 3 numbers say 8,10,12 ... the "best model" to predict this stream for the next 33 per...
47,404
Longitudinal predictive models
Going by your statement, you might want to look at Hidden Markov Modeling. The paper by Paas et al. will give you a good idea. For modeling the same, you can look at the depmixS4 package in R. Some of your constraints might be that you can use only information till month 3, which will hurt your estimation. Hope this hel...
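A minimal depmixS4 sketch of that suggestion, assuming a single customer's monthly balance series and two hidden states (the toy data, column name, and number of states are mine, not from the original answer):

    library(depmixS4)
    set.seed(1)
    df  <- data.frame(balance = 100 + cumsum(rnorm(60, mean = 5, sd = 20)))  # toy series
    mod <- depmix(balance ~ 1, data = df, nstates = 2, family = gaussian())
    fm  <- fit(mod)
    summary(fm)
    head(posterior(fm))   # most likely hidden state per month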
47,405
Critical effect sizes and power for paired t test
Yes, this is possible and even fairly easy, but additional information is required. Specifically, we have to make an assumption about what the correlation between the observations from each pair is. The effect size as a difference in standard deviation units is usually referred to as $d$. We can apply a correction fac...
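As a sketch of that correction, assuming the usual conversion $d_z = d/\sqrt{2(1-r)}$ with $r$ the assumed within-pair correlation (the numbers below are placeholders):

    d <- 0.5; r <- 0.6; n <- 30          # assumed between-subjects d, correlation, number of pairs
    dz <- d / sqrt(2 * (1 - r))          # effect size on the difference scale
    power.t.test(n = n, delta = dz, sd = 1, type = "paired")$power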
47,406
Critical effect sizes and power for paired t test
Any effect size is possible; it's hard to tell what you mean by "makes sense". There are many power calculators on the internet, found with a simple Google search. You'll see that G*Power is often recommended here. Conceptually you calculate power more or less the same way. In the independent case you're probably using...
47,407
Ideal number of variables for PCA
One of the main applications of PCA is to reduce the dimensionality when there are many variables. So yes, you should use all of the variables. I routinely apply PCA to tens of thousands of variables (gene expression data) and it works very well. What can happen is that when analysing PCA you will have to look into mor...
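For instance, in R (iris is used purely as a stand-in for a wide data matrix; in practice you would pass all of your variables):

    p <- prcomp(iris[, 1:4], scale. = TRUE)   # PCA on all (scaled) variables
    summary(p)                                 # variance explained per component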
47,408
Interpreting 3D scatter plot
3D-scatterplots are sometimes a bit confusing, especially if you can't rotate the plot around. However, the scatterplot matrix supports the interpretation, at least here, rather nicely, even if it is missing the colors. As a.desantos already pointed out, the individual scatterplots in the second image are projections o...
47,409
Interpreting 3D scatter plot
I think there are missing colours on the second image.
47,410
How do I find data to show whether a shaved die is really loaded?
What we want to know is how well a reasonable test (like a chi-squared test) can detect a small difference in the chances (of the six outcomes). This is its power: it depends on the size of the difference (a large difference is easy to see) and the number of observations (a large number can detect smaller differences)...
47,411
How do I find data to show whether a shaved die is really loaded?
One simple approach is to focus on a single face, say six (from the link it looks like this is one of the "flat" faces so should come up less often than 1/6 of the time). Then, if you roll the die $n$ times, you can test the hypothesis that $p = 1/6$ using the test statistic $$ Z = \frac{ \hat p - 1/6}{\sqrt{\frac{ (1/...
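A rough R sketch of how one could size such an experiment under the normal approximation (the alternative probability p1 and the number of rolls n are placeholders I chose; this is not part of the original answer):

    p0 <- 1/6; p1 <- 0.14; n <- 1000
    se0  <- sqrt(p0 * (1 - p0) / n)     # SE under the null, as in the Z statistic
    se1  <- sqrt(p1 * (1 - p1) / n)     # SE under the alternative
    crit <- qnorm(0.975) * se0          # two-sided 5% cutoff on the p-hat scale
    pnorm((p0 - crit - p1) / se1) +     # approximate power
      pnorm((p0 + crit - p1) / se1, lower.tail = FALSE)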
47,412
A question from a test in statistics?
None are correct. The first is wrong because Spearman's rank correlation is defined as the Pearson correlation using ranks instead of actual values: therefore when the Spearman correlation is not $1$, the Pearson correlation cannot be $1$, either. In the second answer the covariance and $S_x$ are measured in different un...
47,413
Variance and covariance of binary data
The shortcut formula for the covariance of two binary variables is $(n\,k_{xy}-k_xk_y)/n^2$, where $k_x$ is the number of pairs in which $x=1$, $k_y$ is the number of pairs in which $y=1$, and $k_{xy}$ is the number of pairs in which $x=y=1$. Both that formula and the formula you gave are usually called "population" fo...
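A quick numerical check of the shortcut (note it is the population form, dividing by $n$, so it is compared against cov() rescaled by $(n-1)/n$):

    set.seed(2)
    n <- 50
    x <- rbinom(n, 1, 0.4); y <- rbinom(n, 1, 0.6)
    kx <- sum(x); ky <- sum(y); kxy <- sum(x == 1 & y == 1)
    (n * kxy - kx * ky) / n^2       # shortcut formula
    cov(x, y) * (n - 1) / n         # population covariance, same value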
47,414
Regarding the formula of using $\text{P}(Y|X)$ to compute $\text{E}[X]$
$$\text{E}[X] = \frac{\int x \, P(y_i \mid x) \, dx}{\int P(y_i \mid x) \, dx}$$ is not a general statement, but only a first step in expectation propagation (EP). EP tries to approximate a posterior distribution $P(x \mid \mathcal{D})$ using a given factorization of the joint, $$P(x) \prod_i P(y_i \mid x).$$ To reduce...
47,415
Regarding the formula of using $\text{P}(Y|X)$ to compute $\text{E}[X]$
The formula on that slide was a straw man and not intended to make sense. The point was that moment matching does not make sense on an individual likelihood term in isolation. This is illustrated further on the next slides. I have actually seen this bad approach used in papers, so I thought it was worth pointing out...
47,416
How is the working correlation matrix estimated for GEE?
If you look at the note (and your quotation) specifically, "The data in long form could be naively thrown into an ordinary least squares (OLS) linear regression…ignoring the correlation between subjects." A good reference for your question is Liang and Zeger (1986) in Biometrika. Section 3.3 shows that the correlation ...
47,417
Modelling zero-inflated proportion data in R using GAMLSS
The answer below references the inflated beta GAMLSS documentation (Rigby & Stasinopoulos, 2010, section 10.8.2, page 215). It would seem that your data could be fitted with the inflated beta model. The response variable for the $\nu$ component of the model is a ratio of probabilities (an odds) given by $\nu = p_0 / (1...
47,418
Modelling zero-inflated proportion data in R using GAMLSS
I think the odds $\frac{p_0}{1-p_0-p_1}$ are given by $e^{\nu}$ not $\nu$, and similarly for $\frac{p_1}{1-p_0-p_1} = e^{\tau}$! This means that in the above answer one needs to use the exponentials of $\nu$ and $\tau$: $p_0 = \frac{e^\nu}{1 + e^{\nu} + e^{\tau}}$ and similarly for $p_1$.
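Putting the two answers together, a gamlss sketch (a toy data frame d with a response y in [0, 1] and a covariate x; those names are mine) that extracts $p_0$ and $p_1$ using the exponentials, as this answer points out:

    library(gamlss)
    fit <- gamlss(y ~ x, nu.formula = ~1, tau.formula = ~1, family = BEINF, data = d)
    nu  <- predict(fit, what = "nu",  type = "link")   # log( p0 / (1 - p0 - p1) )
    tau <- predict(fit, what = "tau", type = "link")   # log( p1 / (1 - p0 - p1) )
    p0  <- exp(nu)  / (1 + exp(nu) + exp(tau))
    p1  <- exp(tau) / (1 + exp(nu) + exp(tau))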
47,419
Weighted standard deviation of average
You're confusing addition of random variables with concatenation of samples (easy to do, it took me a while to realize why your code didn't work!). So for independent random variables $X$ and $Y$ you can write $\rm{Var}(aX+bY) = a^2\rm{Var}(X) + b^2\rm{Var}(Y)$, noting the coefficients are squared. This would apply (ap...
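A one-line simulation check of that rule for independent variables (distributions chosen arbitrarily):

    set.seed(3)
    a <- 0.25; b <- 0.75
    X <- rnorm(1e5, 0, 2); Y <- rnorm(1e5, 1, 3)
    var(a * X + b * Y)            # empirical
    a^2 * var(X) + b^2 * var(Y)   # a^2 Var(X) + b^2 Var(Y)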
47,420
Weighted standard deviation of average
It's not completely clear exactly what you do want with your weighted standard deviation but I will presume you want the standard deviation of a single population from which you have drawn two samples, and for some reason (eg sampling method) you want to give more weight to one of the samples. The main point you go wr...
47,421
Weighted standard deviation of average
It seems your maths are wrong and there are some conceptual troubles. Let $Z$ be the actual compilation of your two distributions. $$ Z= 0.25A+0.75B \\ V[Z] = V[0.25A+0.75B] = 0.25^2V[A]+0.75^2V[B] + 2\cdot 0.25\cdot 0.75 \cdot Cov[A,B] $$ And because both are iid we have $$ V[Z] = 0.25^2V[A]+0.75^2V[B] \\ \widehat{\ma...
47,422
How to interpret multimodal distribution of bootstrapped correlation?
My guess would be that there is a (set of) outlier(s) in your data. One mode represents those samples that included them and the other the samples that did not include them. My guess would be that the right mode corresponds to the samples that exclude both the point with the smallest value of $x$ and the point with the...
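A toy illustration of that mechanism (the data and the outlier are invented): resamples that include the extreme point pile up at one correlation, the rest at another.

    set.seed(4)
    n <- 30
    x <- rnorm(n); y <- 0.2 * x + rnorm(n)
    x[1] <- 6; y[1] <- 6                      # one influential point
    r <- replicate(2000, { i <- sample(n, replace = TRUE); cor(x[i], y[i]) })
    hist(r, breaks = 50)                      # typically bimodal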
47,423
Why is k called representer of evaluation in the definition of kernel functions
When it says "follows directly from the definition", it means directly from the definition of $\langle -,- \rangle$, not directly from the definition of $\Phi$. Equation 2.24 on page 33 is a definition (that's why it uses the := notation instead of just an equals sign.) The definition says that if $f = \sum_i \alpha_i ...
47,424
Why is k called representer of evaluation in the definition of kernel functions
In an RKHS framework, the Hilbert-norm-minimizing solution for any function $f(\cdot)$ can be written as a linear combination of the kernel evaluated at the data points and at $x$ itself. I.e., in detail, if you don't know the form of $f$'s Hilbert-norm-minimizing solution, all you need ...
47,425
Estimating a distribution from above/below observations
You could try to directly estimate the CDF via a binomial rate smoother? Here is an idealized example for x stemming from a normal distribution:

    ci = seq(from=-3, to=3, length=500)
    X  = rnorm(500)
    Y  = rep(NA, 500)
    for (i in 1:500) Y[i] = as.numeric(X[i] < ci[i])
    plot(ci, Y, type="s")
    library(mgcv)
    li...
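The snippet above is cut off; a plausible continuation, written by me under the assumption that the author intended a binomial GAM of the indicator on the threshold ci, would be:

    b <- gam(Y ~ s(ci), family = binomial)                  # smooth P(X < ci) as a function of ci
    lines(ci, predict(b, type = "response"), col = "red")   # estimated CDF
    lines(ci, pnorm(ci), col = "blue", lty = 2)             # true normal CDF, for comparison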
47,426
What does "chi" mean and come from in "chi-squared distribution"?
Chi is a Greek letter. The canonical modern history references are Karl Pearson's introduction of the chi-square test in 1900 and R.A. Fisher's work in 1924, but there is ancient history too: F.R. Helmert in 1876 deserves more than a nod. http://jeff560.tripod.com/c.html is a good start, especially if other historical...
47,427
Confidence error bars and "central point": Should we emphasize the median?
Median! Note these advantages: the median and its C.I. (see below) are equivariant to monotone transformations of your data: $$\mathrm{med}(g(x))=g(\mathrm{med}(x))$$ for any function $g$ monotone on the domain of $x$ (e.g. $\log()$ if $x>0$). It's robust in the sense that it's minimally changed when you replace a...
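A quick check of the equivariance property (odd sample size so the median is an actual observation; log as the monotone $g$):

    set.seed(6)
    x <- rexp(7)
    all.equal(median(log(x)), log(median(x)))   # TRUE: the median is equivariant
    mean(log(x)) - log(mean(x))                  # nonzero: the mean is not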
47,428
Kmeans: Whether to standardise? Can you use categorical variables? Is Cluster 3.0 suitable?
First of all: yes, standardization is a must unless you have a strong argument why it is not necessary. Probably try z scores first. Discrete data is a larger issue. K-means is meant for continuous data. The mean will not be discrete, so the cluster centers will likely be anomalous. You have a high chance that the clus...
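A minimal sketch of that advice in R (iris stands in for your continuous columns; k = 3 is arbitrary):

    X  <- scale(iris[, 1:4])                  # z scores
    km <- kmeans(X, centers = 3, nstart = 25)
    table(km$cluster)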
47,429
Kmeans: Whether to standardise? Can you use categorical variables? Is Cluster 3.0 suitable?
For number one: yes, you have to standardize. The exact method really depends on what you expect to obtain from the data, but in general you need to have all of the features on the same scale. The reason is that otherwise the feature with the largest range will have more weight in the clustering process. For ...
47,430
Chi-squared distribution for dice not returning expected values?
A chi-square statistic gets bigger the further from expected the entries are. The p-value gets smaller. Very small p-values are saying "If the null hypothesis of equal probabilities were true, something really unlikely just happened" (the usual conclusion is then usually that something less remarkable happened under th...
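For example (counts invented): a near-uniform table gives a modest statistic and a large p-value, while a lopsided one gives a big statistic and a tiny p-value.

    chisq.test(c(10, 12, 9, 11, 10, 8))   # close to expected: large p-value
    chisq.test(c(30, 5, 5, 5, 5, 10))     # far from expected: tiny p-value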
47,431
Getting rid of a huge categorical factor in multiple regression
I would think lme4 would be highly appropriate for this. Treat your huge categorical factor as a practical random effect. I won't go into the theoretical definitions. Alternatively, use sparse.model.matrix() from Matrix to build the design matrix and then pass that into glmnet() from the glmnet package. (lme4 naturally ...
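Sketches of both suggestions (the formula, response, and column names are placeholders, not from the original answer):

    library(lme4)
    m1 <- lmer(y ~ x1 + x2 + (1 | big_factor), data = dat)   # factor as a random effect

    library(Matrix)
    library(glmnet)
    X  <- sparse.model.matrix(~ x1 + x2 + big_factor, data = dat)[, -1]
    m2 <- glmnet(X, dat$y)                                    # penalized dummy coding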
47,432
Theoretical objections to hypothesis testing [duplicate]
Well, try virtually any book about Bayesian statistics. I haven't found one that doesn't have at least a few paragraphs debunking the practice of significance testing. Really. And Gelman's books are a good example of that. I used to remember relevant titles (I still remember my first one: E.T. Jaynes "Probability theory - the...
47,433
Theoretical objections to hypothesis testing [duplicate]
This paper characterizes publication and scientific practice in psychology as a ritual of finding p<.05. The authors argue against hypothesis testing without a solid theoretical basis. Note, though, that the argument is, if anything, more philosophical than anything else. But maybe it helps.
47,434
Theoretical objections to hypothesis testing [duplicate]
I'm reading an article right now called "Principles of Inference and Their Consequences" by D.G. Mayo and M. Kruse. It's a good article so far about how hypothesis testing can violate principles like the likelihood principle (LP). They go through a concrete example of coin-tossing and show how the concept of statistical...
47,435
Simulate ARIMA by hand
The warning is because the numerical optimization used for the MLE has reached the default maximum number of iterations before converging. You can increase that by adding optim.control = list(maxit = ?) at the end of your ARIMA fit. Something like arima(x.ts, order = c(p,d,q), optim.control = list(maxit = 1000)). I think the default for ? is 50...
47,436
Must I normalize inputs into a perceptron that uses a sigmoid activation function?
The inputs should be scaled to the so-called "active range" of the activation function, or, in other words, the area of the function curve where the derivative of the function is clearly non-zero. This is done for backpropagation to work properly, since it uses activation function derivatives, and ~ 0 derivatives imply...
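To illustrate the "active range" point (my own plot, not from the answer): the logistic derivative is essentially zero beyond roughly $|x| > 4$, which is where backpropagation stalls.

    x <- seq(-10, 10, length.out = 400)
    s <- 1 / (1 + exp(-x))
    plot(x, s * (1 - s), type = "l", ylab = "derivative of sigmoid")
    abline(v = c(-4, 4), lty = 2)   # outside this band the gradient is ~0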
47,437
Inequality involving interquartile range and standard deviation
The IQR and standard deviation both are proportional to a scale factor, so the proper way to compare the two is with their ratio. Upper bound for SD:IQR The Cauchy distribution with PDF $$\frac{dx / \sigma}{\pi(1 + (x/\sigma)^2)}$$ has infinite SD and quartiles at $\pm\sigma$. From it we can create, via truncation on ...
47,438
Distribution of ratio of sample means from two independent normal variables?
This framework is a particular case of Cox's model http://www.jstor.org/stable/2530661 studied here http://onlinelibrary.wiley.com/doi/10.1002/bimj.200310009/abstract
47,439
Distribution of ratio of sample means from two independent normal variables?
If you could only divide Y by c, all of your data would come from $N(\mu, \sigma^2)$. This suggests to me an iterative approach. Estimate c, then use the pooled data to estimate $\mu$ and $\sigma^2$; then use these improved estimates to get a better estimate of c, and repeat until it converges. This ducks the questi...
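One possible implementation of that iteration (entirely my own sketch of the idea, with simulated data):

    set.seed(5)
    mu <- 10; sigma <- 2; cc <- 3
    X <- rnorm(40, mu, sigma)            # X ~ N(mu, sigma^2)
    Y <- rnorm(25, cc * mu, cc * sigma)  # Y ~ N(c*mu, (c*sigma)^2)
    c_hat <- mean(Y) / mean(X)           # initial estimate of c
    for (k in 1:20) {
      pooled <- c(X, Y / c_hat)          # rescale Y and pool
      mu_hat <- mean(pooled); sd_hat <- sd(pooled)
      c_hat  <- mean(Y) / mu_hat         # update c from the pooled mean
    }
    c(c_hat = c_hat, mu_hat = mu_hat, sd_hat = sd_hat)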
47,440
quantile in scipy library
You are calling the normal pdf, with parameters $\mu=2$ and $\sigma=9$, evaluated at the points 0, 1, 2, 3, 4. It cannot be interpreted as a probability as is. Are you interested in probabilities or quantiles? If you want quantiles, try scipy.stats.norm.ppf([.05, .5, .95], 2, 9), which will give you the quantiles at the point...
47,441
Percent correctly predicted of logit model
@FrankHarrell is correct that percent accuracy isn't the loss function that logistic regression is trying to optimize. So there could be situations where the best model according to the (quasi) binomial likelihood isn't also the best one according to percent accuracy. Edited to add: He's also right in the comments belo...
47,442
t-distribution confidence intervals for non-Gaussian data but large n
Ok, after the hint of Procrastinator I think this is the answer (please correct me if I missed something). First of all, $\frac{\overline{X}_n-\mu}{S_n/\sqrt{n}}$ is t-distributed if $\overline{X}_n$ has a normal distribution, $\sqrt{n-1}\,S_n/\sigma$ has a $\chi$-distribution with $n-1$ degrees of freedom, and $\overline{X}_n$ and $S_n$...
47,443
How to explain the connection between SVD and clustering?
Perhaps this will help, taken from the Wikipedia article on PCA (PCA is very similar to SVD): "Relation between PCA and K-means clustering It has been shown recently (2001,2004) that the relaxed solution of K-means clustering, specified by the cluster indicators, is given by the PCA principal components, and the PCA su...
47,444
How to explain the connection between SVD and clustering?
I had a similar question in mind when trying to compare methods like SVD, PCA, LSA (Latent Semantic Analysis), and NMF (Non-negative Matrix Factorization). To concur with Bitwise's point, the NMF Wikipedia page http://en.wikipedia.org/wiki/Non-negative_matrix_factorization states that "It has been shown [27][28] NMF...
47,445
Why can a polynomial of degree $>2$ not be a cumulant generating function?
In the meantime, I found out that the result (rephrased in terms of characteristic functions) was first described in the paper J. Marcinkiewicz, "Sur une propriété de la loi de Gauss", Mathematische Zeitschrift 44 (1939) 612-618. The result is also proved on p. 213 of E. Lukacs, Characteristic Functions, 2nd ed., Griffi...
47,446
Why can a polynomial of degree $>2$ not be a cumulant generating function?
For future reference, this set of lecture notes provides a rather succinct proof of the Marcinkiewicz theorem: https://math.uc.edu/~brycw/probab/charakt/charakt.pdf (and excels by being neither paywalled nor in French). It seems the key is to reframe the problem as showing that the characteristic function for the differenc...
47,447
How to compute the standard error of the mean of an AR(1) process?
Well, there are three things as I see it with this question: 1) In your derivation, when you take the variance of the terms inside, $\rho$ should get squared and you should end up with the expression below (I didn't consider the autocovariance earlier; sorry about that): $$ Var(\overline{x}) = \frac{\sigma_{\varepsilon}^2}{N}...
47,448
How to compute the standard error of the mean of an AR(1) process?
Well, actually, when you take the following \begin{align*} Var(\overline{x}) &= Var\left(\frac{1}{N} \sum\limits_{t=0}^{N-1} x_t\right) \end{align*} it is easier to derive an implicit value rather than an explicit value in this case. Your answer and mine are the same; it's just that yours is a bit more difficult to ...
How to compute the standard error of the mean of an AR(1) process?
Well actually when you take the following \begin{align*} Var(\overline{x}) &= Var\left(\frac{1}{N} \sum\limits_{t=0}^{N-1} x_t\right) \\ \end{align*} It is easier to derive an implicit value rather t
How to compute the standard error of the mean of an AR(1) process? Well actually when you take the following \begin{align*} Var(\overline{x}) &= Var\left(\frac{1}{N} \sum\limits_{t=0}^{N-1} x_t\right) \\ \end{align*} It is easier to derive an implicit value rather than an explicit value in this case..your answer and m...
How to compute the standard error of the mean of an AR(1) process? Well actually when you take the following \begin{align*} Var(\overline{x}) &= Var\left(\frac{1}{N} \sum\limits_{t=0}^{N-1} x_t\right) \\ \end{align*} It is easier to derive an implicit value rather t
47,449
How to compute the standard error of the mean of an AR(1) process?
This is the R code btw .. nrMCS <- 10000 N <- 100 pers <- 0.9 means <- numeric(nrMCS) for (i in 1:nrMCS) { means[i] <- mean(arima.sim(model=list(ar=c(pers)), n = N,mean=0,sd=1)) } #Simulation answer ans1 <-sd(means) #This should be the standard error according to the given formula cov <- 0 for(i in 1:N){ for(j in...
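A runnable sketch of what the flattened snippet above appears to be doing (Monte Carlo standard error of the AR(1) sample mean versus the analytic value); the names nrMCS, N, pers and ans1 come from the original, while the completion of the double loop is my assumption:

set.seed(1)
nrMCS <- 10000; N <- 100; pers <- 0.9
means <- replicate(nrMCS, mean(arima.sim(model = list(ar = pers), n = N, sd = 1)))
ans1 <- sd(means)                          # simulated SE of the sample mean
gamma0 <- 1 / (1 - pers^2)                 # Var(x_t) for unit-variance innovations
covsum <- 0
for (i in 1:N) for (j in 1:N) covsum <- covsum + gamma0 * pers^abs(i - j)
ans2 <- sqrt(covsum / N^2)                 # analytic SE: sqrt(Var(xbar))
c(simulated = ans1, analytic = ans2)       # the two numbers should be close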
How to compute the standard error of the mean of an AR(1) process?
This is the R code btw .. nrMCS <- 10000 N <- 100 pers <- 0.9 means <- numeric(nrMCS) for (i in 1:nrMCS) { means[i] <- mean(arima.sim(model=list(ar=c(pers)), n = N,mean=0,sd=1)) } #Simulation ans
How to compute the standard error of the mean of an AR(1) process? This is the R code btw .. nrMCS <- 10000 N <- 100 pers <- 0.9 means <- numeric(nrMCS) for (i in 1:nrMCS) { means[i] <- mean(arima.sim(model=list(ar=c(pers)), n = N,mean=0,sd=1)) } #Simulation answer ans1 <-sd(means) #This should be the standard er...
How to compute the standard error of the mean of an AR(1) process? This is the R code btw .. nrMCS <- 10000 N <- 100 pers <- 0.9 means <- numeric(nrMCS) for (i in 1:nrMCS) { means[i] <- mean(arima.sim(model=list(ar=c(pers)), n = N,mean=0,sd=1)) } #Simulation ans
47,450
How to compute the standard error of the mean of an AR(1) process?
Don't know if it qualifies as a formal answer, but on the simulation side, standard error of a means estimator is defined as est(sd(means))/sqrt(N), which would give: > .9459876/sqrt(100) [1] 0.09459876 Not sure why you were using sd(means) and calling it standard error (if I understood the code comment right). It wo...
How to compute the standard error of the mean of an AR(1) process?
Don't know if it qualifies as a formal answer, but on the simulation side, standard error of a means estimator is defined as est(sd(means))/sqrt(N), which would give: > .9459876/sqrt(100) [1] 0.094598
How to compute the standard error of the mean of an AR(1) process? Don't know if it qualifies as a formal answer, but on the simulation side, standard error of a means estimator is defined as est(sd(means))/sqrt(N), which would give: > .9459876/sqrt(100) [1] 0.09459876 Not sure why you were using sd(means) and calling...
How to compute the standard error of the mean of an AR(1) process? Don't know if it qualifies as a formal answer, but on the simulation side, standard error of a means estimator is defined as est(sd(means))/sqrt(N), which would give: > .9459876/sqrt(100) [1] 0.094598
47,451
Cox regression when reference group had zero events
Well, what you're doing wrong is using as the reference group a group with zero events. Instead of hazard ratios, think in simpler terms (in my opinion) of incidence rate ratios (IRRs), where the incidence rate (IR) is $IR=\text{number of cases }/\text{ total person-time}$. $$IRR_{\text{quartile 4 vs. quartile 1}}=\frac{...
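To make the problem concrete (a generic illustration, not the numbers from the question): with $d$ events over $T$ person-time in each group,
$$IRR_{\text{exposed vs. reference}} = \frac{d_{\text{exp}}/T_{\text{exp}}}{d_{\text{ref}}/T_{\text{ref}}},$$
so a reference group with $d_{\text{ref}}=0$ puts a zero in the denominator and the ratio (and likewise the hazard ratio) is undefined or infinite. Picking a reference category that actually has events, collapsing categories, or (a common ad hoc fix) adding a small continuity correction such as 0.5 events per group avoids this.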
Cox regression when reference group had zero events
Well, what you're doing wrong is using as the reference group a group with zero events. Instead of hazard ratios, think in simpler terms (in my opinion) of incident rate ratios (IRRs), where the incid
Cox regression when reference group had zero events Well, what you're doing wrong is using as the reference group a group with zero events. Instead of hazard ratios, think in simpler terms (in my opinion) of incident rate ratios (IRRs), where the incident rate (IR) is $IR=\text{number of cases }/\text{ total person-tim...
Cox regression when reference group had zero events Well, what you're doing wrong is using as the reference group a group with zero events. Instead of hazard ratios, think in simpler terms (in my opinion) of incident rate ratios (IRRs), where the incid
47,452
Cox regression when reference group had zero events
To supplement andrea's response by extending it a bit to hazard ratios: The hazard of an event is the instantaneous probability of an event occurring at time t, conditional on it not having previously occurred. Your problem should be clear instantly - with no events, the probability is zero. Borrowing from andrea's exa...
Cox regression when reference group had zero events
To supplement andrea's response by extending it a bit to hazard ratios: The hazard of an event is the instantaneous probability of an event occurring at time t, conditional on it not having previously
Cox regression when reference group had zero events To supplement andrea's response by extending it a bit to hazard ratios: The hazard of an event is the instantaneous probability of an event occurring at time t, conditional on it not having previously occurred. Your problem should be clear instantly - with no events, ...
Cox regression when reference group had zero events To supplement andrea's response by extending it a bit to hazard ratios: The hazard of an event is the instantaneous probability of an event occurring at time t, conditional on it not having previously
47,453
What are features that distinguish clustering, blind signal separation and dimensionality reduction?
Short Answer: Clustering and blind signal separation (BSS) are often used together in an application, and when this is the case, the BSS algorithm comes first as a pre-processing step in order to "reduce the dimension of the problem". The original inputs can then be accordingly "cut down" before being fed into a clust...
What are features that distinguish clustering, blind signal separation and dimensionality reduction?
Short Answer: Clustering and blind signal separation (BSS) are often used together in an application, and when this is the case, the BSS algorithm comes first as a pre-processing step in order to "red
What are features that distinguish clustering, blind signal separation and dimensionality reduction? Short Answer: Clustering and blind signal separation (BSS) are often used together in an application, and when this is the case, the BSS algorithm comes first as a pre-processing step in order to "reduce the dimension o...
What are features that distinguish clustering, blind signal separation and dimensionality reduction? Short Answer: Clustering and blind signal separation (BSS) are often used together in an application, and when this is the case, the BSS algorithm comes first as a pre-processing step in order to "red
47,454
What are features that distinguish clustering, blind signal separation and dimensionality reduction?
That Wikipedia article is a mess. No wonder it has been tagged as "cleanup" for more than two years. If you want to learn about clustering, do not approach it from the learning side. To the machine learning side, unsupervised learning is the ugly duckling they resort to when they don't have any labeled training data. B...
What are features that distinguish clustering, blind signal separation and dimensionality reduction?
That Wikipedia article is a mess. No wonder it has been tagged as "cleanup" for more than two years. If you want to learn about clustering, do not approach it from the learning side. To the machine le
What are features that distinguish clustering, blind signal separation and dimensionality reduction? That Wikipedia article is a mess. No wonder it has been tagged as "cleanup" for more than two years. If you want to learn about clustering, do not approach it from the learning side. To the machine learning side, unsupe...
What are features that distinguish clustering, blind signal separation and dimensionality reduction? That Wikipedia article is a mess. No wonder it has been tagged as "cleanup" for more than two years. If you want to learn about clustering, do not approach it from the learning side. To the machine le
47,455
Resource to read about distributions
Here are two more distribution resources. They are descriptive and present equations, without much proof, application, or even discussion. From NIST: http://www.itl.nist.gov/div898/handbook/eda/section3/eda366.htm From Dr. M.P. McLaughlin: http://www.causascientia.org/math_stat/Dists/Compendium.pdf
Resource to read about distributions
Here are two more distribution resources. They are descriptive and present equations, without much proof, application, or even discussion. From NIST: http://www.itl.nist.gov/div898/handbook/eda/sect
Resource to read about distributions Here are two more distribution resources. They are descriptive and present equations, without much proof, application, or even discussion. From NIST: http://www.itl.nist.gov/div898/handbook/eda/section3/eda366.htm From Dr. M.P. McLaughlin: http://www.causascientia.org/math_stat/Di...
Resource to read about distributions Here are two more distribution resources. They are descriptive and present equations, without much proof, application, or even discussion. From NIST: http://www.itl.nist.gov/div898/handbook/eda/sect
47,456
Resource to read about distributions
Look at the series "Distributions in Statistics" by Johnson and Kotz. Continuous Univariate Distributions, Vol. 1 (Wiley Series in Probability and Statistics) Continuous Univariate Distributions, Vol. 2 (Wiley Series in Probability and Statistics) Univariate Discrete Distributions (Wiley Series in Probability and S...
Resource to read about distributions
Look at the series "Distributions in Statistics" by Johnson and Kotz. Continuous Univariate Distributions, Vol. 1 (Wiley Series in Probability and Statistics) Continuous Univariate Distributions, V
Resource to read about distributions Look at the series "Distributions in Statistics" by Johnson and Kotz. Continuous Univariate Distributions, Vol. 1 (Wiley Series in Probability and Statistics) Continuous Univariate Distributions, Vol. 2 (Wiley Series in Probability and Statistics) Univariate Discrete Distributio...
Resource to read about distributions Look at the series "Distributions in Statistics" by Johnson and Kotz. Continuous Univariate Distributions, Vol. 1 (Wiley Series in Probability and Statistics) Continuous Univariate Distributions, V
47,457
How do I decide which family of variance/link functions to use in a generalized linear model?
It depends on the nature of your dependent variable: Gaussian is for a continuous DV (this is ordinary least squares); Binomial, as you note, is for logistic regression; Poisson is for count data (non-negative integers), see also quasipoisson; Gamma is for a continuous DV that is always positive (although often you can us...
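A minimal sketch of how these choices look in R's glm(); the data frame below is a simulated placeholder:

set.seed(9)
df <- data.frame(x = rnorm(100))
df$y       <- 1 + 2 * df$x + rnorm(100)                      # continuous DV
df$success <- rbinom(100, 1, plogis(df$x))                   # binary DV
df$counts  <- rpois(100, exp(0.5 + 0.3 * df$x))              # count DV
df$cost    <- rgamma(100, shape = 2, rate = 2 / exp(df$x))   # positive continuous DV
glm(y ~ x, family = gaussian, data = df)                     # same fit as lm(y ~ x)
glm(success ~ x, family = binomial, data = df)               # logistic regression
glm(counts ~ x, family = poisson, data = df)                 # Poisson regression
glm(cost ~ x, family = Gamma(link = "log"), data = df)       # Gamma regression, log link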
How do I decide which family of variance/link functions to use in a generalized linear model?
It depends on the nature of your dependent variable: Gaussian is for continuous DV (this is ordinary least squares) Binomial, as you note, is for logistic regression . Poisson is for count data (non-n
How do I decide which family of variance/link functions to use in a generalized linear model? It depends on the nature of your dependent variable: Gaussian is for continuous DV (this is ordinary least squares) Binomial, as you note, is for logistic regression . Poisson is for count data (non-negative integers). See als...
How do I decide which family of variance/link functions to use in a generalized linear model? It depends on the nature of your dependent variable: Gaussian is for continuous DV (this is ordinary least squares) Binomial, as you note, is for logistic regression . Poisson is for count data (non-n
47,458
"Running it" multiple times in No-Limit Hold'em poker
Expected value is linear, even over dependent random variables. The chance to win each run (not conditional on particular outcomes of the previous runs) is the same as the chance to win the original, so there is no advantage in expected winnings to running the hand $n$ times instead of once. Although some people find i...
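Spelled out (a generic sketch of the argument above): if the pot $P$ is split equally over $n$ runs and $I_k$ indicates winning run $k$, each with marginal win probability $p$, then the total winnings are $W=\frac{P}{n}\sum_{k=1}^{n} I_k$ and
$$E[W]=\frac{P}{n}\sum_{k=1}^{n}E[I_k]=\frac{P}{n}\,np = pP,$$
regardless of $n$; linearity of expectation does not require the $I_k$ to be independent.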
"Running it" multiple times in No-Limit Hold'em poker
Expected value is linear, even over dependent random variables. The chance to win each run (not conditional on particular outcomes of the previous runs) is the same as the chance to win the original,
"Running it" multiple times in No-Limit Hold'em poker Expected value is linear, even over dependent random variables. The chance to win each run (not conditional on particular outcomes of the previous runs) is the same as the chance to win the original, so there is no advantage in expected winnings to running the hand ...
"Running it" multiple times in No-Limit Hold'em poker Expected value is linear, even over dependent random variables. The chance to win each run (not conditional on particular outcomes of the previous runs) is the same as the chance to win the original,
47,459
"Running it" multiple times in No-Limit Hold'em poker
"Figuring out the percentage chance of 1 player winning is no easy feat for running it out once only" On the contrary, approximations are easy on the flop and turn. From the flop, with 2 cards to come. Percent to win = (# outs) x 4. Example, if you have 9 clean flush outs on the flop, then you are about 36% (actual...
"Running it" multiple times in No-Limit Hold'em poker
"Figuring out the percentage chance of 1 player winning is no easy feat for running it out once only" On the contrary, approximations are easy on the flop and turn. From the flop, with 2 cards to
"Running it" multiple times in No-Limit Hold'em poker "Figuring out the percentage chance of 1 player winning is no easy feat for running it out once only" On the contrary, approximations are easy on the flop and turn. From the flop, with 2 cards to come. Percent to win = (# outs) x 4. Example, if you have 9 clean ...
"Running it" multiple times in No-Limit Hold'em poker "Figuring out the percentage chance of 1 player winning is no easy feat for running it out once only" On the contrary, approximations are easy on the flop and turn. From the flop, with 2 cards to
47,460
Correlation and non-normal distributions
It sounds like this problem can be solved with Ruscio & Kaczetow's algorithm for generating correlated non-normal variables. It's flexible enough to work for bimodal distributions. Their article includes R code. Reference: Ruscio, J., & Kaczetow, W. (2008). Simulating multivariate nonnormal data using an iterative al...
Correlation and non-normal distributions
It sounds like this problem can be solved with Ruscio & Kaczetow's algorithm for generating correlated non-normal variables. It's flexible enough to work for bimodal distributions. Their article inc
Correlation and non-normal distributions It sounds like this problem can be solved with Ruscio & Kaczetow's algorithm for generating correlated non-normal variables. It's flexible enough to work for bimodal distributions. Their article includes R code. Reference: Ruscio, J., & Kaczetow, W. (2008). Simulating multivar...
Correlation and non-normal distributions It sounds like this problem can be solved with Ruscio & Kaczetow's algorithm for generating correlated non-normal variables. It's flexible enough to work for bimodal distributions. Their article inc
47,461
Frequentist properties of p-values in relation to type I error
I can't for the life of me get that applet to run in my browser, so I'll try to give an example using R instead. As noted in the comments, it seems that what caused the confusion is that the applet runs under both the alternative and the null hypothesis. To check that the type I error rate really is $0.05$ you need to ...
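A minimal R sketch of that check (an illustration, not the applet's exact setup): simulate data with the null hypothesis true and count how often the test rejects at the 5% level.

set.seed(7)
nsim <- 10000
p <- replicate(nsim, t.test(rnorm(20), rnorm(20))$p.value)  # both samples have true mean 0
mean(p < 0.05)   # should be close to 0.05, the nominal type I error rate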
Frequentist properties of p-values in relation to type I error
I can't for the life of me get that applet to run in my browser, so I'll try to give an example using R instead. As noted in the comments, it seems that what caused the confusion is that the applet ru
Frequentist properties of p-values in relation to type I error I can't for the life of me get that applet to run in my browser, so I'll try to give an example using R instead. As noted in the comments, it seems that what caused the confusion is that the applet runs under both the alternative and the null hypothesis. To...
Frequentist properties of p-values in relation to type I error I can't for the life of me get that applet to run in my browser, so I'll try to give an example using R instead. As noted in the comments, it seems that what caused the confusion is that the applet ru
47,462
Maximum likelihood estimation in a Poisson model for football (soccer) scores
The bivpois package, written by Karlis and Ntzoufras, uses the EM-algorithm for maximum likelihood estimation in this kind of bivariate Poisson models (and some generalisations of them). I don't think that it's on CRAN anymore, but you can find it here. For more information, see the description of the package in Journa...
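For comparison, a much simpler baseline that needs no special package is an independent-Poisson regression fitted with plain glm(); this ignores the dependence between the two teams' scores that the bivariate Poisson model captures. A sketch on simulated placeholder data:

set.seed(6)
teams <- c("A", "B", "C", "D")
d <- data.frame(attack  = factor(sample(teams, 200, replace = TRUE), levels = teams),
                defence = factor(sample(teams, 200, replace = TRUE), levels = teams),
                home    = rbinom(200, 1, 0.5))
d$goals <- rpois(200, exp(0.2 + 0.3 * d$home))               # placeholder scores
fit <- glm(goals ~ home + attack + defence, family = poisson, data = d)
summary(fit)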
Maximum likelihood estimation in a Poisson model for football (soccer) scores
The bivpois package, written by Karlis and Ntzoufras, uses the EM-algorithm for maximum likelihood estimation in this kind of bivariate Poisson models (and some generalisations of them). I don't think
Maximum likelihood estimation in a Poisson model for football (soccer) scores The bivpois package, written by Karlis and Ntzoufras, uses the EM-algorithm for maximum likelihood estimation in this kind of bivariate Poisson models (and some generalisations of them). I don't think that it's on CRAN anymore, but you can fi...
Maximum likelihood estimation in a Poisson model for football (soccer) scores The bivpois package, written by Karlis and Ntzoufras, uses the EM-algorithm for maximum likelihood estimation in this kind of bivariate Poisson models (and some generalisations of them). I don't think
47,463
Maximum likelihood estimation in a Poisson model for football (soccer) scores
You should look at the VGAM package - it has functions to fit the Bradley-Terry model described in the linked questions in the comments.
Maximum likelihood estimation in a Poisson model for football (soccer) scores
You should look at the VGAM package - it has functions to fit the Bradley-Terry model described in the linked questions in the comments.
Maximum likelihood estimation in a Poisson model for football (soccer) scores You should look at the VGAM package - it has functions to fit the Bradley-Terry model described in the linked questions in the comments.
Maximum likelihood estimation in a Poisson model for football (soccer) scores You should look at the VGAM package - it has functions to fit the Bradley-Terry model described in the linked questions in the comments.
47,464
How to combine values based on standard errors?
If you believe that these two means are both estimates of the same true value then inverse-variance weighting is the way to go. That's equivalent to fixed-effect meta-analysis. If you believe that the means are estimating different true values, then things get more tricky. If there were more means, you could do random-...
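Concretely, for two means $\bar x_1, \bar x_2$ with standard errors $\mathrm{SE}_1, \mathrm{SE}_2$, inverse-variance weighting gives
$$\hat\mu=\frac{\bar x_1/\mathrm{SE}_1^2+\bar x_2/\mathrm{SE}_2^2}{1/\mathrm{SE}_1^2+1/\mathrm{SE}_2^2},\qquad \mathrm{SE}(\hat\mu)=\left(\frac{1}{\mathrm{SE}_1^2}+\frac{1}{\mathrm{SE}_2^2}\right)^{-1/2},$$
which is exactly the fixed-effect meta-analytic estimate.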
How to combine values based on standard errors?
If you believe that these two means are both estimates of the same true value then inverse-variance weighting is the way to go. That's equivalent to fixed-effect meta-analysis. If you believe that the
How to combine values based on standard errors? If you believe that these two means are both estimates of the same true value then inverse-variance weighting is the way to go. That's equivalent to fixed-effect meta-analysis. If you believe that the means are estimating different true values, then things get more tricky...
How to combine values based on standard errors? If you believe that these two means are both estimates of the same true value then inverse-variance weighting is the way to go. That's equivalent to fixed-effect meta-analysis. If you believe that the
47,465
How can I check if my time series data is zero mean, stationary and independent identically distributed?
The errors from the model should have a zero mean, or a mean that is not significantly different from zero, everywhere. (1) In practice this means no pulses, no level/step shifts, no seasonal pulses, and no local time trends. (2) The variance of the errors from the final model should be constant, which means no structural...
How can I check if my time series data is zero mean, stationary and independent identically distribu
The errors from the model should have a zero mean or a mean that is not significantly different from zero everywhere. (1) In practice this means no Pulses, no Level/Step shifts , no seasonal pulses an
How can I check if my time series data is zero mean, stationary and independent identically distributed? The errors from the model should have a zero mean or a mean that is not significantly different from zero everywhere. (1) In practice this means no Pulses, no Level/Step shifts , no seasonal pulses and no local time...
How can I check if my time series data is zero mean, stationary and independent identically distribu The errors from the model should have a zero mean or a mean that is not significantly different from zero everywhere. (1) In practice this means no Pulses, no Level/Step shifts , no seasonal pulses an
47,466
How can I check if my time series data is zero mean, stationary and independent identically distributed?
Box and Jenkins suggested using the autocorrelation and partial autocorrelation functions to identify the model. The general Box-Jenkins models are seasonal ARIMA models, which allow for nonstationary components (periodic components and polynomial trends). The rule for testing for nonstationarity is to compute the autoco...
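A minimal sketch of these identification tools in base R, on a simulated AR(1) series (the series and lag choice are placeholders):

set.seed(5)
x <- arima.sim(model = list(ar = 0.7), n = 200)
acf(x)     # very slow decay of the ACF would point toward nonstationarity
pacf(x)    # a cutoff after lag 1 is consistent with an AR(1)
Box.test(x, lag = 10, type = "Ljung-Box")   # joint test for autocorrelation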
How can I check if my time series data is zero mean, stationary and independent identically distribu
Box and Jenkins suggest used the autocorrelation and partial autocorrelation functions to identify the model. The general Box Jenkins models are seasonal ARIMA modles which allow for nonstationary c
How can I check if my time series data is zero mean, stationary and independent identically distributed? Box and Jenkins suggest used the autocorrelation and partial autocorrelation functions to identify the model. The general Box Jenkins models are seasonal ARIMA modles which allow for nonstationary components (peri...
How can I check if my time series data is zero mean, stationary and independent identically distribu Box and Jenkins suggest used the autocorrelation and partial autocorrelation functions to identify the model. The general Box Jenkins models are seasonal ARIMA modles which allow for nonstationary c
47,467
How to simulate Signal-Noise Ratio?
Given a model $$ Y = f(X) + \varepsilon $$ the signal-to-noise ratio can be defined as (ref. ESL10): $$ \frac{Var(f(X))}{Var(\varepsilon)} $$ To generate data with a specific signal-to-noise ratio: signal_to_noise_ratio = 4 data = c(0.47, 0.45, 0.30, 1.15, 0.82, 0.38, 0.51, 1.36, 1.72, 0.36) noise = rnorm(data) # gen...
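A complete version of that idea (a sketch; the rescaling step after the truncation above is my assumption): rescale the noise so that Var(signal)/Var(noise) hits the target ratio.

signal_to_noise_ratio <- 4
signal <- c(0.47, 0.45, 0.30, 1.15, 0.82, 0.38, 0.51, 1.36, 1.72, 0.36)
noise  <- rnorm(length(signal))
noise  <- noise * sqrt(var(signal) / (signal_to_noise_ratio * var(noise)))  # rescale
y      <- signal + noise
var(signal) / var(noise)   # equals signal_to_noise_ratio by construction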
How to simulate Signal-Noise Ratio?
Given a model $$ Y = f(X) + \varepsilon $$ The signal to noise ratio can be defined as (ref. ESL10) : $$ \frac{Var(f(X))}{Var(\varepsilon)} $$ To generate data with a specific signal to noise ratio:
How to simulate Signal-Noise Ratio? Given a model $$ Y = f(X) + \varepsilon $$ The signal to noise ratio can be defined as (ref. ESL10) : $$ \frac{Var(f(X))}{Var(\varepsilon)} $$ To generate data with a specific signal to noise ratio: signal_to_noise_ratio = 4 data = c(0.47, 0.45, 0.30, 1.15, 0.82, 0.38, 0.51, 1.36, 1...
How to simulate Signal-Noise Ratio? Given a model $$ Y = f(X) + \varepsilon $$ The signal to noise ratio can be defined as (ref. ESL10) : $$ \frac{Var(f(X))}{Var(\varepsilon)} $$ To generate data with a specific signal to noise ratio:
47,468
On the corrections for multiple comparisons
I don't think that the fact that you have found significant differences in all fifteen of your comparisons makes a difference. To maintain the familywise error rate, I would be tempted to simply apply a Bonferroni correction. Perhaps it's good to be conservative in this instance given your small sample size (indeed, ev...
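In R this is one line with p.adjust(); the p-values below are placeholders:

p <- c(0.001, 0.004, 0.012, 0.021, 0.030)   # hypothetical raw p-values
p.adjust(p, method = "bonferroni")
p.adjust(p, method = "holm")   # Holm also controls the FWER and is never less powerful than Bonferroni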
On the corrections for multiple comparisons
I don't think that the fact that you have found significant differences in all fifteen of your comparisons makes a difference. To maintain the familywise error rate, I would be tempted to simply apply
On the corrections for multiple comparisons I don't think that the fact that you have found significant differences in all fifteen of your comparisons makes a difference. To maintain the familywise error rate, I would be tempted to simply apply a Bonferroni correction. Perhaps it's good to be conservative in this insta...
On the corrections for multiple comparisons I don't think that the fact that you have found significant differences in all fifteen of your comparisons makes a difference. To maintain the familywise error rate, I would be tempted to simply apply
47,469
On the corrections for multiple comparisons
Alexander's answer is very good and he makes a good suggestion. It does seem a little surprising that everything is significant when the sample size is relatively small as in your example. The Bonferroni bound may be too conservative though if some of the p-values are close to 0.05. I believe that p-value adjustment...
On the corrections for multiple comparisons
Alexander's answer is very good and he makes a good suggestion. It does seem a little surprising that everything is significant when the sample size is relatively small as in your example. The Bonfe
On the corrections for multiple comparisons Alexander's answer is very good and he makes a good suggestion. It does seem a little surprising that everything is significant when the sample size is relatively small as in your example. The Bonferroni bound may be too conservative though if some of the p-values are close...
On the corrections for multiple comparisons Alexander's answer is very good and he makes a good suggestion. It does seem a little surprising that everything is significant when the sample size is relatively small as in your example. The Bonfe
47,470
On the corrections for multiple comparisons
This is one of many questions on here that comes at the problem of multiple comparisons backwards. Indeed, it is basically nonsensical for researchers to look at their results and then ask, "what correction for multiple comparisons is appropriate for p-values that look like this?" Researchers should decide BEFORE looki...
On the corrections for multiple comparisons
This is one of many questions on here that comes at the problem of multiple comparisons backwards. Indeed, it is basically nonsensical for researchers to look at their results and then ask, "what corr
On the corrections for multiple comparisons This is one of many questions on here that comes at the problem of multiple comparisons backwards. Indeed, it is basically nonsensical for researchers to look at their results and then ask, "what correction for multiple comparisons is appropriate for p-values that look like t...
On the corrections for multiple comparisons This is one of many questions on here that comes at the problem of multiple comparisons backwards. Indeed, it is basically nonsensical for researchers to look at their results and then ask, "what corr
47,471
Cumulative Incidence vs. Kaplan Meier to estimate probability of failure
Let me state up front that I don't have answers for all of your questions. I'm not as strong on competing risks as simpler applications of survival analysis. So, I will just throw out a couple of pieces of information here that may be helpful. I suspect KM curves are more common because they are older and conceptual...
Cumulative Incidence vs. Kaplan Meier to estimate probability of failure
Let me state up front that I don't have answers for all of your questions. I'm not as strong on competing risks as simpler applications of survival analysis. So, I will just throw out a couple of pi
Cumulative Incidence vs. Kaplan Meier to estimate probability of failure Let me state up front that I don't have answers for all of your questions. I'm not as strong on competing risks as simpler applications of survival analysis. So, I will just throw out a couple of pieces of information here that may be helpful. ...
Cumulative Incidence vs. Kaplan Meier to estimate probability of failure Let me state up front that I don't have answers for all of your questions. I'm not as strong on competing risks as simpler applications of survival analysis. So, I will just throw out a couple of pi
47,472
Cumulative Incidence vs. Kaplan Meier to estimate probability of failure
You should look at the work of Jason Fine on competing risk modeling. In place of Kaplan-Meier there is the cumulative incidence function; also, analogous to the hazard function in survival analysis is the cause-specific hazard function. There is a competing risk model, usually called the Fine-Gray model, that he uses. I heard him sp...
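A minimal sketch, assuming the cmprsk package (Gray's implementation) and made-up data, of the two tools mentioned in this thread: the cumulative incidence function and the Fine-Gray subdistribution hazard regression.

library(cmprsk)
set.seed(1)
n <- 200
ftime   <- rexp(n)                                   # follow-up time
fstatus <- sample(0:2, n, replace = TRUE)            # 0 = censored, 1 = event of interest, 2 = competing event
grp     <- factor(sample(c("A", "B"), n, replace = TRUE))
cuminc(ftime, fstatus, group = grp)                  # cumulative incidence by group and cause
crr(ftime, fstatus, cov1 = model.matrix(~ grp)[, -1, drop = FALSE])  # Fine-Gray regression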
Cumulative Incidence vs. Kaplan Meier to estimate probability of failure
You should look at the work of Jason Fine on competing risk modeling. Inplace of Kaplan-Meier there is the cumulative incidence function also analogous to the Hazard function in survival analysis is t
Cumulative Incidence vs. Kaplan Meier to estimate probability of failure You should look at the work of Jason Fine on competing risk modeling. Inplace of Kaplan-Meier there is the cumulative incidence function also analogous to the Hazard function in survival analysis is the cause specific hazard function. There is a ...
Cumulative Incidence vs. Kaplan Meier to estimate probability of failure You should look at the work of Jason Fine on competing risk modeling. Inplace of Kaplan-Meier there is the cumulative incidence function also analogous to the Hazard function in survival analysis is t
47,473
Cumulative Incidence vs. Kaplan Meier to estimate probability of failure
Cumulative incidence is NOT the opposite of survival in general. If one person can only ever experience one event, yes, this is the case. However, if you are comparing the risk of, say, herpes outbreaks (where one individual may have several outbreaks over the duration of the study), the cumulative incidence curve will account f...
Cumulative Incidence vs. Kaplan Meier to estimate probability of failure
Cumulative incidence is NOT opposite of survival in general. If one person can only experience one event ever, yes this is the case. However, if you are comparing risk of, say, herpes outbreak (where
Cumulative Incidence vs. Kaplan Meier to estimate probability of failure Cumulative incidence is NOT opposite of survival in general. If one person can only experience one event ever, yes this is the case. However, if you are comparing risk of, say, herpes outbreak (where one individual may have several outbreaks over ...
Cumulative Incidence vs. Kaplan Meier to estimate probability of failure Cumulative incidence is NOT opposite of survival in general. If one person can only experience one event ever, yes this is the case. However, if you are comparing risk of, say, herpes outbreak (where
47,474
Inconsistency in mixed-effects model estimation results (Stata and SPSS)
Stata reports the estimated standard deviations of the random effects, whereas SPSS reports variances (this means you are not comparing apples with apples). If you square the results from Stata (or if you take the square root of the results from SPSS), you will see that they are exactly the same. For example, squaring...
Inconsistency in mixed-effects model estimation results (Stata and SPSS)
Stata reports the estimated standard deviations of the random effects, whereas SPSS reports variances (this means you are not comparing apples with apples). If you square the results from Stata (or if
Inconsistency in mixed-effects model estimation results (Stata and SPSS) Stata reports the estimated standard deviations of the random effects, whereas SPSS reports variances (this means you are not comparing apples with apples). If you square the results from Stata (or if you take the squared root of the results from ...
Inconsistency in mixed-effects model estimation results (Stata and SPSS) Stata reports the estimated standard deviations of the random effects, whereas SPSS reports variances (this means you are not comparing apples with apples). If you square the results from Stata (or if
47,475
Why do we typically visually assess our assumptions?
I disagree with holding the opinion that the data is normally distributed unless you have statistically rejected normality. This is the procedure we follow when the goal of our research is actually to REJECT H0. It is not a procedure we should follow to test the assumptions of our statistical analysis. What do we usual...
Why do we typically visually assess our assumptions?
I disagree with holding the opinion that the data is normally distributed unless you have statistically rejected normality. This is the procedure we follow when the goal of our research is actually to
Why do we typically visually assess our assumptions? I disagree with holding the opinion that the data is normally distributed unless you have statistically rejected normality. This is the procedure we follow when the goal of our research is actually to REJECT H0. It is not a procedure we should follow to test the assu...
Why do we typically visually assess our assumptions? I disagree with holding the opinion that the data is normally distributed unless you have statistically rejected normality. This is the procedure we follow when the goal of our research is actually to
47,476
Why do we typically visually assess our assumptions?
You certainly could say that you don't have enough data to test the assumptions. Generally speaking, in significance testing we hold that there is a default position which we will continue to believe unless there is sufficient evidence to the contrary. (Somewhat odd, I agree.) This 'default position' goes by the nam...
Why do we typically visually assess our assumptions?
You certainly could say that you don't have enough data to test the assumptions. Generally speaking, in significance testing we hold that there is a default position which we will continue to believe
Why do we typically visually assess our assumptions? You certainly could say that you don't have enough data to test the assumptions. Generally speaking, in significance testing we hold that there is a default position which we will continue to believe unless there is sufficient evidence to the contrary. (Somewhat od...
Why do we typically visually assess our assumptions? You certainly could say that you don't have enough data to test the assumptions. Generally speaking, in significance testing we hold that there is a default position which we will continue to believe
47,477
Why do we typically visually assess our assumptions?
Why do we look at the sample - because it's all we've got. It would be great to look at the population to see if it meets our assumption but we can't. We typically know what the residuals (or whatever) from our sample would look like if the population met our assumptions - so we look at them, just as part of the norma...
Why do we typically visually assess our assumptions?
Why do we look at the sample - because it's all we've got. It would be great to look at the population to see if it meets our assumption but we can't. We typically know what the residuals (or whateve
Why do we typically visually assess our assumptions? Why do we look at the sample - because it's all we've got. It would be great to look at the population to see if it meets our assumption but we can't. We typically know what the residuals (or whatever) from our sample would look like if the population met our assump...
Why do we typically visually assess our assumptions? Why do we look at the sample - because it's all we've got. It would be great to look at the population to see if it meets our assumption but we can't. We typically know what the residuals (or whateve
47,478
Ensembling regression models
If you are experiencing overfitting you could look into regularized regression, which in R can be fit using many packages such as glmnet. There are many good tutorials for this - one is Regularization Paths for Generalized Linear Models via Coordinate Descent. You might also look at randomForest or gbm in R, depending...
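A minimal glmnet sketch on simulated data (placeholder x and y; alpha = 1 is the lasso, alpha = 0 is ridge):

library(glmnet)
set.seed(3)
x <- matrix(rnorm(100 * 20), 100, 20)
y <- x[, 1] - 2 * x[, 2] + rnorm(100)
cv <- cv.glmnet(x, y, alpha = 1)   # cross-validation over the penalty lambda
coef(cv, s = "lambda.min")         # coefficients at the CV-chosen penalty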
Ensembling regression models
If you are experiencing over fitting you could look into regularized regression which in R can be fit using many packages such as (glmnet). There are many good tutorials for this - one is Regularizati
Ensembling regression models If you are experiencing over fitting you could look into regularized regression which in R can be fit using many packages such as (glmnet). There are many good tutorials for this - one is Regularization Paths for Generalized Linear Models via Coordinate Descent You might also look at rando...
Ensembling regression models If you are experiencing over fitting you could look into regularized regression which in R can be fit using many packages such as (glmnet). There are many good tutorials for this - one is Regularizati
47,479
Multiple FDR corrected experiments using the same data
The answer would depend on how you measure errors (and their proportions). If you are concerned with the proportion of false discoveries within each experiment, then do separate FDR corrections. If you are worried about the "global" proportion of false discoveries, you could treat all the experiments as one. This would...
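In R the two choices look like this (placeholder p-values; "BH" is the Benjamini-Hochberg FDR adjustment):

set.seed(2)
p_exp1 <- runif(50)                                   # p-values from experiment 1
p_exp2 <- runif(80)                                   # p-values from experiment 2
q1 <- p.adjust(p_exp1, method = "BH")                 # FDR controlled within each experiment
q2 <- p.adjust(p_exp2, method = "BH")
q_all <- p.adjust(c(p_exp1, p_exp2), method = "BH")   # "global" FDR over the pooled set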
Multiple FDR corrected experiments using the same data
The answer would depend on how you measure errors (and their proportions). If you are concerned with the proportion of false discoveries within each experiment, then do separate FDR corrections. If yo
Multiple FDR corrected experiments using the same data The answer would depend on how you measure errors (and their proportions). If you are concerned with the proportion of false discoveries within each experiment, then do separate FDR corrections. If you are worried about the "global" proportion of false discoveries,...
Multiple FDR corrected experiments using the same data The answer would depend on how you measure errors (and their proportions). If you are concerned with the proportion of false discoveries within each experiment, then do separate FDR corrections. If yo
47,480
The unit information prior and its BIC approximation
Looking at BIC's formula $$ BIC = -2 \log\left(\sup_\theta f(x\mid\theta)\right) + k \, \log n $$ you will see that there is no trace of the prior $\pi(\theta)$ in it. That's because the derivation of the BIC by Schwarz is based on an asymptotic result under which his prior (a formal prior which puts mass on subspa...
The unit information prior and its BIC approximation
Looking at BIC's formula $$ BIC = -2 \log\left(\sup_\theta f(x\mid\theta)\right) + k \, \log n $$ you will see that there is no trace of the prior $\pi(\theta)$ on it. That's because the derivatio
The unit information prior and its BIC approximation Looking at BIC's formula $$ BIC = -2 \log\left(\sup_\theta f(x\mid\theta)\right) + k \, \log n $$ you will see that there is no trace of the prior $\pi(\theta)$ on it. That's because the derivation of the BIC by Schwarz is based on an asymptotic result under whic...
The unit information prior and its BIC approximation Looking at BIC's formula $$ BIC = -2 \log\left(\sup_\theta f(x\mid\theta)\right) + k \, \log n $$ you will see that there is no trace of the prior $\pi(\theta)$ on it. That's because the derivatio
47,481
Dependent vs. independent samples
Reading all the answers and comments, it's clear that we have a bit of a Catch-22 here. People can't answer the question without more context, but the question seems to be asking for that context. So, I'm going to take a shot at this, trying to guess what Serenity Stack Holder means. Two samples (or more than two) are ...
Dependent vs. independent samples
Reading all the answers and comments, it's clear that we have a bit of a Catch-22 here. People can't answer the question without more context, but the question seems to be asking for that context. So,
Dependent vs. independent samples Reading all the answers and comments, it's clear that we have a bit of a Catch-22 here. People can't answer the question without more context, but the question seems to be asking for that context. So, I'm going to take a shot at this, trying to guess what Serenity Stack Holder means. T...
Dependent vs. independent samples Reading all the answers and comments, it's clear that we have a bit of a Catch-22 here. People can't answer the question without more context, but the question seems to be asking for that context. So,
47,482
Dependent vs. independent samples
@whuber is right that we need a little bit more context to decipher what you mean by "samples." If you mean "samples" in the sense of "the result of doing sampling," and thus you're using the term as a synonym of "realizations", then the following applies: Samples are dependent conditional on some (or possibly no) pri...
Dependent vs. independent samples
@whuber is right that we need a little bit more context to decipher what you mean by "samples." If you mean "samples" in the sense of "the result of doing sampling," and thus you're using the term as
Dependent vs. independent samples @whuber is right that we need a little bit more context to decipher what you mean by "samples." If you mean "samples" in the sense of "the result of doing sampling," and thus you're using the term as a synonym of "realizations", then the following applies: Samples are dependent condit...
Dependent vs. independent samples @whuber is right that we need a little bit more context to decipher what you mean by "samples." If you mean "samples" in the sense of "the result of doing sampling," and thus you're using the term as
47,483
Dependent vs. independent samples
Terminology: I'm a chemist. I have many samples which together form one sample in the statistical sense. See also: How to define what a "sample" is? Maybe a list with easy cases is a start: if your samples are correlated, they are not independent (but you cannot conclude the other way round). (kind of obvious): if one s...
Dependent vs. independent samples
terminology: I'm chemist. I have many samples which together form one sample in the statistical sense. see also: How to define what a "sample" is? Maybe a list with easy cases is a start: if your sam
Dependent vs. independent samples terminology: I'm chemist. I have many samples which together form one sample in the statistical sense. see also: How to define what a "sample" is? Maybe a list with easy cases is a start: if your samples are correlated, they are not independent (but you cannot conclude the other way r...
Dependent vs. independent samples terminology: I'm chemist. I have many samples which together form one sample in the statistical sense. see also: How to define what a "sample" is? Maybe a list with easy cases is a start: if your sam
47,484
How to input self-defined distance function in R?
hclust() takes a distance matrix, which you can construct yourself, doing the calculations in R or reading them in from elsewhere. as.dist() can be used to convert an arbitrary matrix into a 'dist' object, which is a convenient representation of a distance matrix that hclust() understands. Obviously whether your ow...
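A minimal sketch of that workflow with an arbitrary pairwise function (the distance used here is just an illustration):

set.seed(4)
X <- matrix(rnorm(20 * 3), nrow = 20)
my_dist <- function(a, b) sum(abs(a - b))      # example: Manhattan distance
n <- nrow(X)
D <- matrix(0, n, n)
for (i in 1:(n - 1)) for (j in (i + 1):n) D[i, j] <- D[j, i] <- my_dist(X[i, ], X[j, ])
hc <- hclust(as.dist(D), method = "average")   # as.dist() turns the matrix into a 'dist' object
plot(hc)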
How to input self-defined distance function in R?
hclust() takes a distance matrix, which you can construct yourself, doing the calculations in R or reading them in from elsewhere. as.dist() can be used to convert an arbitrary matrix into a 'dist' o
How to input self-defined distance function in R? hclust() takes a distance matrix, which you can construct yourself, doing the calculations in R or reading them in from elsewhere. as.dist() can be used to convert an arbitrary matrix into a 'dist' object, which is a convenient representation of a distance matrix that...
How to input self-defined distance function in R? hclust() takes a distance matrix, which you can construct yourself, doing the calculations in R or reading them in from elsewhere. as.dist() can be used to convert an arbitrary matrix into a 'dist' o
47,485
How to input self-defined distance function in R?
Have a look at proxy, it creates distance matrices from any custom function. set.seed(1) mat <- matrix(runif(5)) fn <- function(x, y) 1 - cos(x - y) proxy::dist(mat, method = fn) 1 2 3 4 2 0.005678023 3 0.046859766 0.020078605 ...
How to input self-defined distance function in R?
Have a look at proxy, it creates distance matrices from any custom function. set.seed(1) mat <- matrix(runif(5)) fn <- function(x, y) 1 - cos(x - y) proxy::dist(mat, method = fn) 1
How to input self-defined distance function in R? Have a look at proxy, it creates distance matrices from any custom function. set.seed(1) mat <- matrix(runif(5)) fn <- function(x, y) 1 - cos(x - y) proxy::dist(mat, method = fn) 1 2 3 4 2 0.005678023 ...
How to input self-defined distance function in R? Have a look at proxy, it creates distance matrices from any custom function. set.seed(1) mat <- matrix(runif(5)) fn <- function(x, y) 1 - cos(x - y) proxy::dist(mat, method = fn) 1
47,486
How to input self-defined distance function in R?
My approach is to write the distance function for two vectors and use the apply function to calculate distances for all pairs of vectors (stored in a data frame, for example). Convert this symmetric matrix to a dist object using as.dist(). hclust() takes a dist object as an argument. If you're plotting a heatmap, or somethi...
How to input self-defined distance function in R?
My approach is to write the distance function for two vectors and use the apply function to calculate distance to pairs of vectors (stored in a data frame, for example). Convert this symmetric matrix
How to input self-defined distance function in R? My approach is to write the distance function for two vectors and use the apply function to calculate distance to pairs of vectors (stored in a data frame, for example). Convert this symmetric matrix to a dist object using as.dist(). hclust() takes a dist object as an ...
How to input self-defined distance function in R? My approach is to write the distance function for two vectors and use the apply function to calculate distance to pairs of vectors (stored in a data frame, for example). Convert this symmetric matrix
47,487
WiFi localization using machine learning
Could you simply take all APs that you saw in any reading to create a set $AP$ and fill in 0 strength for APs which do not appear in a particular scan? So a particular scan would be recorded as $S_i = \{s_1, s_2, s_3, \dots, s_n\}$ where each $s_i$ is the strength of $AP_i$. That is, if there were $n$ unique APs seen in a...
WiFi localization using machine learning
Could you simply take all APs that you saw in any reading to create a set $AP$ and fill in 0 strength for APs which do not appear in a particular scan? So a particular scan would be recorded as $S_i =
WiFi localization using machine learning Could you simply take all APs that you saw in any reading to create a set $AP$ and fill in 0 strength for APs which do not appear in a particular scan? So a particular scan would be recorded as $S_i = {s_1, s_2, s_3, \dots, s_n}$ where each $s_i$ is the strength of $AP_i$. That ...
WiFi localization using machine learning Could you simply take all APs that you saw in any reading to create a set $AP$ and fill in 0 strength for APs which do not appear in a particular scan? So a particular scan would be recorded as $S_i =
47,488
WiFi localization using machine learning
Here is a sketch of a naive Bayes solution. Define $X_i$ as the $4$-tuple $(SSID_i, BSSID_i, SS_i, FREQ_i)$. Denote the indicator of room $j$ by $R_j$, for $j=1,2,3$. Use the frequencies in your sample to specify $P(X_i\mid R_j)$ as the number of times the $4$-tuple $X_i$ was observed in room $R_j$ divided by the t...
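A toy R sketch of that counting scheme (made-up scan data; here only the BSSID stands in for the full 4-tuple, and add-one smoothing stands in for handling tuples never seen in a room):

scans <- data.frame(room  = c("R1","R1","R1","R2","R2","R3","R3","R3"),
                    bssid = c("ap1","ap2","ap1","ap1","ap3","ap2","ap3","ap3"))
tab <- table(scans$room, scans$bssid)                     # counts of each AP per room
lik <- (tab + 1) / (rowSums(tab) + ncol(tab))             # P(AP | room) with add-one smoothing
new_scan <- c("ap1", "ap3")                               # APs seen in the scan to classify
scores <- apply(lik[, new_scan, drop = FALSE], 1, prod)   # naive independence, flat prior over rooms
scores / sum(scores)                                      # posterior probability of each room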
WiFi localization using machine learning
Here is a sketch of a naive Bayes solution. Define $X_i$ as the $4$-tuple $(SSID_i, BSSID_i, SS_i, FREQ_i)$. Denote the indicator of room $j$ by $R_j$, for $j=1,2,3$. Use the frequencies in your sa
WiFi localization using machine learning Here is a sketch of a naive Bayes solution. Define $X_i$ as the $4$-tuple $(SSID_i, BSSID_i, SS_i, FREQ_i)$. Denote the indicator of room $j$ by $R_j$, for $j=1,2,3$. Use the frequencies in your sample to especify $P(X_i\mid R_j)$ as the number of times the $4$-tuple $X_i$ wa...
WiFi localization using machine learning Here is a sketch of a naive Bayes solution. Define $X_i$ as the $4$-tuple $(SSID_i, BSSID_i, SS_i, FREQ_i)$. Denote the indicator of room $j$ by $R_j$, for $j=1,2,3$. Use the frequencies in your sa
47,489
Dimensionality reduction method for uncorrelated data?
A simple discriminative classifier should train in seconds and generalize well after tuning l1 and l2 regularization parameters on a dataset of this size. There is no need to do dimensionality reduction. If you still needed to do dimensionality reduction for whatever reason, you could use random projections, independen...
Dimensionality reduction method for uncorrelated data?
A simple discriminative classifier should train in seconds and generalize well after tuning l1 and l2 regularization parameters on a dataset of this size. There is no need to do dimensionality reducti
Dimensionality reduction method for uncorrelated data? A simple discriminative classifier should train in seconds and generalize well after tuning l1 and l2 regularization parameters on a dataset of this size. There is no need to do dimensionality reduction. If you still needed to do dimensionality reduction for whatev...
Dimensionality reduction method for uncorrelated data? A simple discriminative classifier should train in seconds and generalize well after tuning l1 and l2 regularization parameters on a dataset of this size. There is no need to do dimensionality reducti
47,490
Dimensionality reduction method for uncorrelated data?
This description is closer to OK, but you still need to describe a lot of things in more detail. Since you want to classify, it seems like what you want is LDA (Linear Discriminant Analysis) more than PCA. You want "dimensionality reduction", possibly because you need to be able to describe the rule you obtain, but...
Dimensionality reduction method for uncorrelated data?
This description is closer to OK, but still you need to describe a lot of things in more detail. Since you want to classify, it seems like what you want is LDA (Linear Discriminant Analysis) more tha
Dimensionality reduction method for uncorrelated data? This description is closer to OK, but still you need to describe a lot of things in more detail. Since you want to classify, it seems like what you want is LDA (Linear Discriminant Analysis) more than PCA. You want to "dimensionality reduction", possibly because y...
Dimensionality reduction method for uncorrelated data? This description is closer to OK, but still you need to describe a lot of things in more detail. Since you want to classify, it seems like what you want is LDA (Linear Discriminant Analysis) more tha
47,491
Classification of observation symbols in a HMM?
This is a classic Black Swan problem. HMM1 will assign zero likelihood to symbols D, E, F and HMM2 will assign zero likelihood to symbols A, B, C. Essentially, from HMM1's perspective D, E, F are impossible, while from HMM2's perspective A, B, C are. They will never predict them. (Note that there is nothing about H...
Classification of observation symbols in a HMM?
This is a classic Black Swan problem. HMM1 will assign zero likelihood to symbols D, E, F and HMM2 will assign zero likelihood to symbols A, B, C. Essentially from HMM1's perspective, D, E, F are im
Classification of observation symbols in a HMM? This is a classic Black Swan problem. HMM1 will assign zero likelihood to symbols D, E, F and HMM2 will assign zero likelihood to symbols A, B, C. Essentially from HMM1's perspective, D, E, F are impossible, while from HMM2s perspective D, E, F are. They will never pre...
Classification of observation symbols in a HMM? This is a classic Black Swan problem. HMM1 will assign zero likelihood to symbols D, E, F and HMM2 will assign zero likelihood to symbols A, B, C. Essentially from HMM1's perspective, D, E, F are im
47,492
Classification of observation symbols in a HMM?
Depending on how you define an observation, you can solve this problem by having a pseudo-observation for rare training observations or unseen observations, e.g. a single 'number' token for all numbers. That way, when the HMM encounters an unseen observation, it looks for the closest pseudo-observation. See 2.7.1 in this for more details...
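A related remedy is to smooth the emission probabilities so that no symbol has exactly zero likelihood; a minimal R sketch with made-up counts (additive/Laplace smoothing, not taken from the linked reference):

counts <- matrix(c(10, 5, 3, 0, 0, 0,
                   0, 0, 0, 7, 6, 2), nrow = 2, byrow = TRUE,
                 dimnames = list(c("state1", "state2"), LETTERS[1:6]))
alpha <- 1                                                        # pseudo-count
B <- (counts + alpha) / (rowSums(counts) + alpha * ncol(counts))  # smoothed emission matrix
rowSums(B)   # each row still sums to 1; unseen symbols now get small nonzero probability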
Classification of observation symbols in a HMM?
Depending on how you define an observation, you can solve this problem by have a pseudo observation for rare training observations or unseen observations, e.g. number for all numbers. That way, when t
Classification of observation symbols in a HMM? Depending on how you define an observation, you can solve this problem by have a pseudo observation for rare training observations or unseen observations, e.g. number for all numbers. That way, when the HMM encounters an unseen observation, it looks for the closest pseudo...
Classification of observation symbols in a HMM? Depending on how you define an observation, you can solve this problem by have a pseudo observation for rare training observations or unseen observations, e.g. number for all numbers. That way, when t
47,493
Dimensionality reduction using self-organizing map
In your extreme example, I'd say your view is correct. You specified that you wanted a reduction to one dimension with two possible values in that dimension and that's what you got. As Wikipedia says, SOM creates a discretized low-dimensional representation. Perhaps the issue is how SOM does this. Let's say you specifi...
Dimensionality reduction using self-organizing map
In your extreme example, I'd say your view is correct. You specified that you wanted a reduction to one dimension with two possible values in that dimension and that's what you got. As Wikipedia says,
Dimensionality reduction using self-organizing map In your extreme example, I'd say your view is correct. You specified that you wanted a reduction to one dimension with two possible values in that dimension and that's what you got. As Wikipedia says, SOM creates a discretized low-dimensional representation. Perhaps th...
Dimensionality reduction using self-organizing map In your extreme example, I'd say your view is correct. You specified that you wanted a reduction to one dimension with two possible values in that dimension and that's what you got. As Wikipedia says,
47,494
Dimensionality reduction using self-organizing map
I would say that SOM reduces the dimension for visual and other analysis purposes, but the mapping between the reduced space and the original space is lost. This is due to the fact that your grid of 3x3, i.e. 9, 2D points is defined a priori and kept unchanged during training. What is mapped directly to the reduced space is the ...
Dimensionality reduction using self-organizing map
I would say that SOM reduces the dimension for visual and other analysis purposes; the mapping between the reduced space and the original space is lost. This is due to the fact that your grid of 3x3 or 9
Dimensionality reduction using self-organizing map I would say that SOM reduces the dimension for visual and other analysis purposes; the mapping between the reduced space and the original space is lost. This is due to the fact that your grid of 3x3 or 9 2D points is defined a priori and kept unchanged during training. W...
Dimensionality reduction using self-organizing map I would say that SOM reduces the dimension for visual and other analysis purposes; the mapping between the reduced space and the original space is lost. This is due to the fact that your grid of 3x3 or 9
47,495
Dimensionality reduction using self-organizing map
A 1 by 2 SOM is not a 1-dimensional SOM, but 2-dimensional. Your view that "... we can say we use a 1-dimensional output space to represent the original 1000-dimensional space." is therefore not right. If you want a 1-dimensional SOM, set it at 1 by 1. Your original data of 200 by 1000 will then be reduced to 1 by 1...
Dimensionality reduction using self-organizing map
A 1 by 2 SOM is not a 1-dimensional SOM, but 2-dimensional. Your view that "... we can say we use a 1-dimensional output space to represent the original 1000-dimensional space." is therefore not rig
Dimensionality reduction using self-organizing map A 1 by 2 SOM is not a 1-dimensional SOM, but 2-dimensional. Your view that "... we can say we use a 1-dimensional output space to represent the original 1000-dimensional space." is therefore not right. If you want a 1-dimensional SOM, set it at 1 by 1. Your original ...
Dimensionality reduction using self-organizing map A 1 by 2 SOM is not a 1-dimensional SOM, but 2-dimensional. Your view that "... we can say we use a 1-dimensional output space to represent the original 1000-dimensional space." is therefore not rig
47,496
Dimensionality reduction using self-organizing map
No, the feature vectors $x_1$ and $x_2$ are in the 1000-dimensional space. If you train with the same points for long enough, each feature vector approaches the Euclidean mean of its corresponding data points.
Dimensionality reduction using self-organizing map
No, the feature vectors $x_1$ and $x_2$ are in the 1000-dimensional space. If you train with the same points for long enough, each feature vector approaches the Euclidean mean of its corresponding da
Dimensionality reduction using self-organizing map No, the feature vectors $x_1$ and $x_2$ are in the 1000-dimensional space. If you train with the same points for long enough, each feature vector approaches the Euclidean mean of its corresponding data points.
Dimensionality reduction using self-organizing map No, the feature vectors $x_1$ and $x_2$ are in the 1000-dimensional space. If you train with the same points for long enough, each feature vector approaches the Euclidean mean of its corresponding da
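A hedged numpy sketch of the claim above: with a fixed data set and enough passes, each codebook (feature) vector ends up close to the Euclidean mean of the points it wins. The two-cluster data, the learning-rate schedule and the winner-take-all simplification (neighbourhood shrunk to zero) are illustrative assumptions.

```python
# Winner-take-all SOM updates on a fixed two-cluster data set: each codebook
# vector drifts towards the mean of the points it wins.
import numpy as np

rng = np.random.default_rng(1)
cluster_a = rng.normal(0.0, 0.1, size=(100, 1000))
cluster_b = rng.normal(1.0, 0.1, size=(100, 1000))
data = np.vstack([cluster_a, cluster_b])

w = rng.normal(0.5, 0.01, size=(2, 1000))       # two codebook vectors, 1000-d each

for epoch in range(50):
    lr = 0.5 / (1 + epoch)                      # decaying learning rate
    for x in rng.permutation(data):
        bmu = np.argmin(((w - x) ** 2).sum(axis=1))   # best-matching unit
        w[bmu] += lr * (x - w[bmu])                   # pull the winner towards x

means = np.vstack([cluster_a.mean(axis=0), cluster_b.mean(axis=0)])
dists = np.linalg.norm(w[:, None, :] - means[None, :, :], axis=2)
print(dists.round(2))   # one small entry per row: each codebook sits near its cluster mean
```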
47,497
Reference for random forests
Random forest is a machine learning algorithm proposed by Breiman in this paper (there is also a webpage about it). Its significant property is that it can calculate an importance measure for attributes, showing more or less how useful they were to the model -- it is usually better than correlation with a decision or line...
Reference for random forests
Random forest is a machine learning algorithm proposed by Breiman in this paper (there is also a webpage about it). Its significant property is that it can calculate an importance measure for attribut
Reference for random forests Random forest is a machine learning algorithm proposed by Breiman in this paper (there is also a webpage about it). Its significant property is that it can calculate an importance measure for attributes, showing more or less how useful they were to the model -- it is usually better than correl...
Reference for random forests Random forest is a machine learning algorithm proposed by Breiman in this paper (there is also a webpage about it). Its significant property is that it can calculate an importance measure for attribut
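A hedged sketch of the importance measure mentioned above, using scikit-learn's implementation of Breiman's random forest rather than the original code; the synthetic data set with 3 informative features out of 10 is purely illustrative.

```python
# Fit a forest on synthetic data and read off the per-attribute importances.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=10,
                           n_informative=3, n_redundant=0, random_state=0)

forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# One importance score per attribute; higher means more useful to the model.
for i, imp in sorted(enumerate(forest.feature_importances_), key=lambda t: -t[1]):
    print(f"feature {i}: {imp:.3f}")
```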
47,498
Train a SVM-based classifier while taking into account the weight information
What you are asking doesn't really fall into the framework of the SVM. There is some work on incorporating prior knowledge into SVMs (see e.g. here), but these approaches are generally not on an example-by-example basis. I can think of one way in which you could approach this if you have a lot of samples. You could use...
Train a SVM-based classifier while taking into account the weight information
What you are asking doesn't really fall into the framework of the SVM. There is some work on incorporating prior knowledge into SVMs (see e.g. here), but these approaches are generally not on an example
Train a SVM-based classifier while taking into account the weight information What you are asking doesn't really fall into the framework of the SVM. There is some work on incorporating prior knowledge into SVMs (see e.g. here), but these approaches are generally not on an example-by-example basis. I can think of one way...
Train a SVM-based classifier while taking into account the weight information What you are asking doesn't really fall into the framework of the SVM. There is some work on incorporating prior knowledge into SVMs (see e.g. here), but these approaches are generally not on an example
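The answer above is truncated, so the following is not its proposed method; it is a hedged sketch of one common way to attach per-example weights to an SVM: scikit-learn's SVC accepts a sample_weight array at fit time, which rescales each example's slack penalty. The confidence values below are invented for illustration.

```python
# Per-example weights via SVC's sample_weight argument.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=5, random_state=0)

rng = np.random.default_rng(0)
confidence = rng.uniform(0.2, 1.0, size=len(y))   # weight per training example

clf = SVC(kernel="rbf", C=1.0)
clf.fit(X, y, sample_weight=confidence)           # low-confidence examples count less
print(clf.score(X, y))
```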
47,499
Train a SVM-based classifier while taking into account the weight information
A paper that might be of interest is "Estimating a Kernel Fisher Discriminant in the Presence of Label Noise" by Lawrence and Scholkopf, which deals with KFD rather than SVM, but the two classifiers are closely related and will give similar results for most problems. Note that the KFD is equivalent to kernel ridge reg...
Train a SVM-based classifier while taking into account the weight information
A paper that might be of interest is "Estimating a Kernel Fisher Discriminant in the Presence of Label Noise" by Lawrence and Scholkopf, which deals with KFD rather than SVM, but the two classifiers a
Train a SVM-based classifier while taking into account the weight information A paper that might be of interest is "Estimating a Kernel Fisher Discriminant in the Presence of Label Noise" by Lawrence and Scholkopf, which deals with KFD rather than SVM, but the two classifiers are closely related and will give similar r...
Train a SVM-based classifier while taking into account the weight information A paper that might be of interest is "Estimating a Kernel Fisher Discriminant in the Presence of Label Noise" by Lawrence and Scholkopf, which deals with KFD rather than SVM, but the two classifiers a
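A hedged sketch of the equivalence mentioned at the end of the answer above: a kernel Fisher discriminant can be recovered (up to scaling and offset) by kernel ridge regression onto +/-1 class labels, followed by thresholding. The kernel, regularisation constant and synthetic data are illustrative choices, not taken from the cited paper.

```python
# Kernel ridge regression onto +/-1 labels used as a KFD-style discriminant.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.kernel_ridge import KernelRidge

X, y = make_moons(n_samples=300, noise=0.2, random_state=0)
t = 2.0 * y - 1.0                                 # code the two classes as -1 / +1

krr = KernelRidge(alpha=0.1, kernel="rbf", gamma=2.0).fit(X, t)

pred = np.sign(krr.predict(X))                    # classify by the sign of the fit
print("training accuracy:", (pred == t).mean())
```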
47,500
Train a SVM-based classifier while taking into account the weight information
There is a technique called weighted SVM (see ref below), which appears to be supported by LibSVM (which I've never actually used). Weighted SVM solves the problem of having two classes with unequal training data. In this case, classification is biased towards the class with more observations. To compensate, W-SVM sets ...
Train a SVM-based classifier while taking into account the weight information
There is a technique called weighted SVM (see ref below), which appears to be supported by LibSVM (which I've never actually used). Weighted SVM solves the problem of having two classes with unequal tr
Train a SVM-based classifier while taking into account the weight information There is a technique called weighted SVM (see ref below), which appears to be supported by LibSVM (which I've never actually used). Weighted SVM solves the problem of having two classes with unequal training data. In this case, classification ...
Train a SVM-based classifier while taking into account the weight information There is a technique called weighted SVM (see ref below), which appears to be supported by LibSVM (which I've never actually used). Weighted SVM solves the problem of having two classes with unequal tr
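A hedged sketch of the class-weighting idea described above. scikit-learn's SVC wraps LibSVM and exposes per-class penalty weights through its class_weight argument (corresponding to LibSVM's -wi option); the 9:1 imbalance and the chosen weights are illustrative.

```python
# Imbalanced two-class problem; compare an unweighted SVM with one that
# penalises mistakes on the rare class more heavily.
from sklearn.datasets import make_classification
from sklearn.metrics import confusion_matrix
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, n_features=10,
                           weights=[0.9, 0.1], random_state=0)

plain = SVC().fit(X, y)                               # biased towards the large class
weighted = SVC(class_weight={0: 1, 1: 9}).fit(X, y)   # up-weight the rare class
# class_weight="balanced" would instead derive the weights from class frequencies

print(confusion_matrix(y, plain.predict(X)))
print(confusion_matrix(y, weighted.predict(X)))       # fewer missed minority cases
```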