idx
int64
1
56k
question
stringlengths
15
155
answer
stringlengths
2
29.2k
question_cut
stringlengths
15
100
answer_cut
stringlengths
2
200
conversation
stringlengths
47
29.3k
conversation_cut
stringlengths
47
301
26,601
Different results after propensity score matching in R
This is a standard behaviour of MatchIt package. It shuffles the observations before matching, i.e., it randomly selects the order of matching for the treated observations. You may use set.seed() function to fix the results. E.g., call set.seed(100) before calling matchit(). Different arguments of set.seed() will correspond to different matchings.
Different results after propensity score matching in R
This is a standard behaviour of MatchIt package. It shuffles the observations before matching, i.e., it randomly selects the order of matching for the treated observations. You may use set.seed() func
Different results after propensity score matching in R This is a standard behaviour of MatchIt package. It shuffles the observations before matching, i.e., it randomly selects the order of matching for the treated observations. You may use set.seed() function to fix the results. E.g., call set.seed(100) before calling matchit(). Different arguments of set.seed() will correspond to different matchings.
Different results after propensity score matching in R This is a standard behaviour of MatchIt package. It shuffles the observations before matching, i.e., it randomly selects the order of matching for the treated observations. You may use set.seed() func
26,602
Different results after propensity score matching in R
This is a very interesting question. The first explanation I can suggest is that your study is quite small and thus few matching differences are impactful. More in general, nearest neighbor matching is not very accurate. Caliper mathing is more reliable, and possibly the differences you report would decrease or disappear using it (as with using inverse probability treatment weighting). Finally, I am not sure whether you used the t test to compare baseline differences (which is inappropriate, as this should be done computing standardized differences), or for hypothesis testing (in which case a paired test should be used). In any case, the typical reporting approach is simply to report results of a single matching procedure, as long as it is correctly done (eg with caliper matching).
Different results after propensity score matching in R
This is a very interesting question. The first explanation I can suggest is that your study is quite small and thus few matching differences are impactful. More in general, nearest neighbor matching i
Different results after propensity score matching in R This is a very interesting question. The first explanation I can suggest is that your study is quite small and thus few matching differences are impactful. More in general, nearest neighbor matching is not very accurate. Caliper mathing is more reliable, and possibly the differences you report would decrease or disappear using it (as with using inverse probability treatment weighting). Finally, I am not sure whether you used the t test to compare baseline differences (which is inappropriate, as this should be done computing standardized differences), or for hypothesis testing (in which case a paired test should be used). In any case, the typical reporting approach is simply to report results of a single matching procedure, as long as it is correctly done (eg with caliper matching).
Different results after propensity score matching in R This is a very interesting question. The first explanation I can suggest is that your study is quite small and thus few matching differences are impactful. More in general, nearest neighbor matching i
26,603
What is the difference between manifold learning and non-linear dimensionality reduction?
Non-linear dimensionality reduction occurs when method used for reduction assumes that manifold on which latent variables are lying is, well... non-linear. So for linear methods manifold is a n-dimensional plane, i.e. affine surface, for non-linear methods it's not. "Manifold learning" term usually means geometrical/topological methods that learn non-linear manifold. So we can think about manifold learning as a subset of non-linear dimensionality reduction methods.
What is the difference between manifold learning and non-linear dimensionality reduction?
Non-linear dimensionality reduction occurs when method used for reduction assumes that manifold on which latent variables are lying is, well... non-linear. So for linear methods manifold is a n-dimens
What is the difference between manifold learning and non-linear dimensionality reduction? Non-linear dimensionality reduction occurs when method used for reduction assumes that manifold on which latent variables are lying is, well... non-linear. So for linear methods manifold is a n-dimensional plane, i.e. affine surface, for non-linear methods it's not. "Manifold learning" term usually means geometrical/topological methods that learn non-linear manifold. So we can think about manifold learning as a subset of non-linear dimensionality reduction methods.
What is the difference between manifold learning and non-linear dimensionality reduction? Non-linear dimensionality reduction occurs when method used for reduction assumes that manifold on which latent variables are lying is, well... non-linear. So for linear methods manifold is a n-dimens
26,604
Gaussian process vs Neural Networks
Gaussian processes are suitable for modelling small datasets where some prior knowledge of the generative process exists. GPs do require assumptions about the functional form of the underlying response. GPs do not scale well in terms of dimensionality. They may provide well calibrated uncertainty output. Neural networks are, on the other hand, more suitable for large and very large data sets where little knowledge about the underlying process or suitable features exist. NNs scale well. Work is being done to enable neural networks to output calibrated uncertainty estimates.
Gaussian process vs Neural Networks
Gaussian processes are suitable for modelling small datasets where some prior knowledge of the generative process exists. GPs do require assumptions about the functional form of the underlying respons
Gaussian process vs Neural Networks Gaussian processes are suitable for modelling small datasets where some prior knowledge of the generative process exists. GPs do require assumptions about the functional form of the underlying response. GPs do not scale well in terms of dimensionality. They may provide well calibrated uncertainty output. Neural networks are, on the other hand, more suitable for large and very large data sets where little knowledge about the underlying process or suitable features exist. NNs scale well. Work is being done to enable neural networks to output calibrated uncertainty estimates.
Gaussian process vs Neural Networks Gaussian processes are suitable for modelling small datasets where some prior knowledge of the generative process exists. GPs do require assumptions about the functional form of the underlying respons
26,605
Calculating Emission Probability values for Hidden Markov Model (HMM)
For these kind of questions, it is possible to use Laplace Smoothing. In general Laplace Smoothing can be written as: $$ \text{If } y \in \begin{Bmatrix} 1,2,...,k\end{Bmatrix} \text{then,}\\ P(y=j)=\frac{\sum_{i=1}^{m} L\begin{Bmatrix} y^{i}=j \end{Bmatrix} + 1}{m+k} $$ Here $L$ is the likelihood. So in this case the emission probability values ( $b_i(o)$ ) can be re-written as: $$ b_i(o) = \frac{\Count(i \to o) + 1}{\Count(i) + n} $$ where $n$ is the number of tags available after the system is trained.
Calculating Emission Probability values for Hidden Markov Model (HMM)
For these kind of questions, it is possible to use Laplace Smoothing. In general Laplace Smoothing can be written as: $$ \text{If } y \in \begin{Bmatrix} 1,2,...,k\end{Bmatrix} \text{then,}\\ P(y=j)=
Calculating Emission Probability values for Hidden Markov Model (HMM) For these kind of questions, it is possible to use Laplace Smoothing. In general Laplace Smoothing can be written as: $$ \text{If } y \in \begin{Bmatrix} 1,2,...,k\end{Bmatrix} \text{then,}\\ P(y=j)=\frac{\sum_{i=1}^{m} L\begin{Bmatrix} y^{i}=j \end{Bmatrix} + 1}{m+k} $$ Here $L$ is the likelihood. So in this case the emission probability values ( $b_i(o)$ ) can be re-written as: $$ b_i(o) = \frac{\Count(i \to o) + 1}{\Count(i) + n} $$ where $n$ is the number of tags available after the system is trained.
Calculating Emission Probability values for Hidden Markov Model (HMM) For these kind of questions, it is possible to use Laplace Smoothing. In general Laplace Smoothing can be written as: $$ \text{If } y \in \begin{Bmatrix} 1,2,...,k\end{Bmatrix} \text{then,}\\ P(y=j)=
26,606
Calculating Emission Probability values for Hidden Markov Model (HMM)
This is relatively old question, but I'll add my 5 cents for the people who (like myself) came across it searching for something related. An alternative approach for dealing with zero emission probabilities is to "close the vocabulary". An idea is to define "rare" words in training set - those that appear less than predefined number of times and substitute them with "word classes" before the model is trained. When applying a model to a new sequence of words, all words that were not seen in a training set are converted to "word classes" as well (effectively considering them as "rare"). It guarantees that for a model there will be no unseen words. The rules for producing "word classes" from words have to be selected manually (which is a downside). For instance, in a (probably) first article when this approach was utilized (Bikel, D.M., Schwartz, R. & Weischedel, R.M. Machine Learning (1999) 34: 211.; https://link.springer.com/article/10.1023/A:1007558221122; http://curtis.ml.cmu.edu/w/courses/index.php/Bikel_et_al_MLJ_1999) an examples of classes are: Word Feature | Example Text | Intuition -----------------------|------------------------|----------------------------------------- twoDigitNum | 90 | Two-digit year fourDigitNum | 1990 | Four digit year containsDigitAndAlpha | A8956-67 | Product code containsDigitAndDash | 09-96 | Date containsDigitAndSlash | 11/9/89 | Date containsDigitAndComma | 23,000.00 | Monetary amount containsDigitAndPeriod | 1.00 Monetary | amount, percentage otherNum | 456789 | Other number allCaps | BBN | Organization capPeriod | M. | Person name initial firstWord | first word of sentence | No useful capitalization information initCap | Sally | Capitalized word lowerCase | can | Uncapitalized word other | , | Punctuation marks, all other words An example of pre-processed tagged sentence from a training set (from lectures of Michael Collins): "Profits/NA soared/NA at/NA Boeing/SC Co./CC ,/NA easily/NA topping/NA forecasts/NA on/NA Wall/SL Street/CL ,/NA as/NA their/NA CEO/NA Alan/SP Mulally/CP announced/NA first/NA quarter/NA results/NA ./NA" is transformed (with some hypothetical set of tags and "rare words") into (substituted words as shown in bold) "firstword/NA soared/NA at/NA initCap/SC Co./CC ,/NA easily/NA lowercase/NA forecasts/NA on/NA initCap/SL Street/CL ,/NA as/NA their/NA CEO/NA Alan/SP initCap/CP announced/NA first/NA quarter/NA results/NA ./NA" It is still possible that in training set not all pairs of "tag -> word/word class" are seen, which makes it impossible for a certain word or word class being tagged with those tags. But that doesn't prevent of those words to be tagged with other tags - unlike when there is a word that was not seen in a training set.
Calculating Emission Probability values for Hidden Markov Model (HMM)
This is relatively old question, but I'll add my 5 cents for the people who (like myself) came across it searching for something related. An alternative approach for dealing with zero emission probabi
Calculating Emission Probability values for Hidden Markov Model (HMM) This is relatively old question, but I'll add my 5 cents for the people who (like myself) came across it searching for something related. An alternative approach for dealing with zero emission probabilities is to "close the vocabulary". An idea is to define "rare" words in training set - those that appear less than predefined number of times and substitute them with "word classes" before the model is trained. When applying a model to a new sequence of words, all words that were not seen in a training set are converted to "word classes" as well (effectively considering them as "rare"). It guarantees that for a model there will be no unseen words. The rules for producing "word classes" from words have to be selected manually (which is a downside). For instance, in a (probably) first article when this approach was utilized (Bikel, D.M., Schwartz, R. & Weischedel, R.M. Machine Learning (1999) 34: 211.; https://link.springer.com/article/10.1023/A:1007558221122; http://curtis.ml.cmu.edu/w/courses/index.php/Bikel_et_al_MLJ_1999) an examples of classes are: Word Feature | Example Text | Intuition -----------------------|------------------------|----------------------------------------- twoDigitNum | 90 | Two-digit year fourDigitNum | 1990 | Four digit year containsDigitAndAlpha | A8956-67 | Product code containsDigitAndDash | 09-96 | Date containsDigitAndSlash | 11/9/89 | Date containsDigitAndComma | 23,000.00 | Monetary amount containsDigitAndPeriod | 1.00 Monetary | amount, percentage otherNum | 456789 | Other number allCaps | BBN | Organization capPeriod | M. | Person name initial firstWord | first word of sentence | No useful capitalization information initCap | Sally | Capitalized word lowerCase | can | Uncapitalized word other | , | Punctuation marks, all other words An example of pre-processed tagged sentence from a training set (from lectures of Michael Collins): "Profits/NA soared/NA at/NA Boeing/SC Co./CC ,/NA easily/NA topping/NA forecasts/NA on/NA Wall/SL Street/CL ,/NA as/NA their/NA CEO/NA Alan/SP Mulally/CP announced/NA first/NA quarter/NA results/NA ./NA" is transformed (with some hypothetical set of tags and "rare words") into (substituted words as shown in bold) "firstword/NA soared/NA at/NA initCap/SC Co./CC ,/NA easily/NA lowercase/NA forecasts/NA on/NA initCap/SL Street/CL ,/NA as/NA their/NA CEO/NA Alan/SP initCap/CP announced/NA first/NA quarter/NA results/NA ./NA" It is still possible that in training set not all pairs of "tag -> word/word class" are seen, which makes it impossible for a certain word or word class being tagged with those tags. But that doesn't prevent of those words to be tagged with other tags - unlike when there is a word that was not seen in a training set.
Calculating Emission Probability values for Hidden Markov Model (HMM) This is relatively old question, but I'll add my 5 cents for the people who (like myself) came across it searching for something related. An alternative approach for dealing with zero emission probabi
26,607
The product of two lognormal random variables
Using Dilips answer here, if $X$ and $Y$ are bi-variate normal and $X \sim N(\mu_1, \sigma_1^2)$ and $Y \sim N(\mu_2, \sigma_2^2)$ and the correlation between $X$ and $Y$ is $\rho$. Then $$ Cov(X,Y) = \rho \sigma_1 \sigma_2,$$ $$X + Y \sim N(\mu_1 + \mu_2, \sigma^2_1 + \sigma^2_2 + 2\rho\sigma_1 \sigma_2). $$ Thus $Z_1Z_2$ will also be a lognormal distribution with parameters $\mu_1 + \mu_2$ and $\sigma^2_1 + \sigma^2_2 + 2\rho\sigma_1 \sigma_2$.
The product of two lognormal random variables
Using Dilips answer here, if $X$ and $Y$ are bi-variate normal and $X \sim N(\mu_1, \sigma_1^2)$ and $Y \sim N(\mu_2, \sigma_2^2)$ and the correlation between $X$ and $Y$ is $\rho$. Then $$ Cov(X,Y) =
The product of two lognormal random variables Using Dilips answer here, if $X$ and $Y$ are bi-variate normal and $X \sim N(\mu_1, \sigma_1^2)$ and $Y \sim N(\mu_2, \sigma_2^2)$ and the correlation between $X$ and $Y$ is $\rho$. Then $$ Cov(X,Y) = \rho \sigma_1 \sigma_2,$$ $$X + Y \sim N(\mu_1 + \mu_2, \sigma^2_1 + \sigma^2_2 + 2\rho\sigma_1 \sigma_2). $$ Thus $Z_1Z_2$ will also be a lognormal distribution with parameters $\mu_1 + \mu_2$ and $\sigma^2_1 + \sigma^2_2 + 2\rho\sigma_1 \sigma_2$.
The product of two lognormal random variables Using Dilips answer here, if $X$ and $Y$ are bi-variate normal and $X \sim N(\mu_1, \sigma_1^2)$ and $Y \sim N(\mu_2, \sigma_2^2)$ and the correlation between $X$ and $Y$ is $\rho$. Then $$ Cov(X,Y) =
26,608
A paper mentions a "Monte Carlo simulation to determine the number of principal components"; how does it work?
A related term to this question is "Parallel Analysis". In simple terms, the monte carlo simulation would generate 1000 (or such) 10304x236 matrices of random normally distributed data (this assumes, of course, that the data you analyzing are normally distributed; if your data were distributed differently, you'd use a different random distribution). You would then extract the eigenvalues for each data set you created, and average each eigenvalue across all 1000 (or such) replications while also creating confidence intervals. You then compare the eigenvalues from your data set to the average eigenvalues from your simulation. Wherever the eigenvalues from your dataset exceed the 99th confidence interval of the eigenvalues from the monte carlo simulation, that's how many factors the analysis would suggest to retain. For example, if the 25th eigenvalue from your data is 2.10 and the 26th is 1.97, and the 99th confidence interval of the 25th eigenvalues from the 1000 (or such) random data sets is 2.04 and the 26th is 2.01, this would suggest that you retain 25 components. There are functions built to do this for you. One link for Matlab is this: http://www.mathworks.com/matlabcentral/fileexchange/44996-parallel-analysis--pa--to-for-determining-the-number-of-components-to-retain-from-pca/content/pa_test.m I found that one by googling "Parallel Analysis in Matlab".
A paper mentions a "Monte Carlo simulation to determine the number of principal components"; how doe
A related term to this question is "Parallel Analysis". In simple terms, the monte carlo simulation would generate 1000 (or such) 10304x236 matrices of random normally distributed data (this assumes,
A paper mentions a "Monte Carlo simulation to determine the number of principal components"; how does it work? A related term to this question is "Parallel Analysis". In simple terms, the monte carlo simulation would generate 1000 (or such) 10304x236 matrices of random normally distributed data (this assumes, of course, that the data you analyzing are normally distributed; if your data were distributed differently, you'd use a different random distribution). You would then extract the eigenvalues for each data set you created, and average each eigenvalue across all 1000 (or such) replications while also creating confidence intervals. You then compare the eigenvalues from your data set to the average eigenvalues from your simulation. Wherever the eigenvalues from your dataset exceed the 99th confidence interval of the eigenvalues from the monte carlo simulation, that's how many factors the analysis would suggest to retain. For example, if the 25th eigenvalue from your data is 2.10 and the 26th is 1.97, and the 99th confidence interval of the 25th eigenvalues from the 1000 (or such) random data sets is 2.04 and the 26th is 2.01, this would suggest that you retain 25 components. There are functions built to do this for you. One link for Matlab is this: http://www.mathworks.com/matlabcentral/fileexchange/44996-parallel-analysis--pa--to-for-determining-the-number-of-components-to-retain-from-pca/content/pa_test.m I found that one by googling "Parallel Analysis in Matlab".
A paper mentions a "Monte Carlo simulation to determine the number of principal components"; how doe A related term to this question is "Parallel Analysis". In simple terms, the monte carlo simulation would generate 1000 (or such) 10304x236 matrices of random normally distributed data (this assumes,
26,609
What question does ANOVA answer?
ANOVA stands for "Analysis of Variance". Rather unsurprisingly, it analyses variance. Let's be a little more explicit. Your observations will exhibit some variance. If you group your observations by your factor 1, the variance within the groups defined by factor 1 will be smaller than the overall variance. Factor 1 "explains variance". However, this is not sufficient to conclude that factor 1 actually does have a relationship to your observations... because grouping by anything whatsoever will "explain" variance. The good thing is that we know how much variance will be explained under the null hypothesis that your factor does, in fact, have nothing to do with your observations. This amount of variance explained under the null is described by an $F$ distribution. Thus, the strategy in ANOVA is to estimate overall variance and within-groups variance (using sums of squares) and taking ratios of these estimated variances. This ratio is the $F$ statistic. We then compare this $F$ statistic to the critical value of the $F$ distribution in a one-sided test, yielding your $p$ value. The number of factor levels goes into one parameter of the $F$ distribution (more factor levels will explain more variance under the null hypothesis), and the number of observations and the number of levels goes into the other. This earlier question may be helpful. (Why a one-sided test? Because, as above, any grouping will explain some variance, so it only makes sense to check whether your factor explains a significantly large amount of variance.) The "Motivating Example" section of the Wikipedia entry provides some very nice illustrations of factors that explain very little, some, and a lot of the overall variance. Two-way ANOVA and interactions, as in your example, as well as ANCOVA, are then just generalizations on this theme. In each case, we investigate whether adding some explanatory variable explains a significantly large amount of variance. Once we have a significant overall $F$ test, we can examine whether certain factor levels' observations are significantly different than others in post-hoc tests. For instance, D may be different from A, B and C, but those may not be significantly different from each other. You will typically use $t$ tests for this. This earlier question may be useful, as well as this one.
What question does ANOVA answer?
ANOVA stands for "Analysis of Variance". Rather unsurprisingly, it analyses variance. Let's be a little more explicit. Your observations will exhibit some variance. If you group your observations by
What question does ANOVA answer? ANOVA stands for "Analysis of Variance". Rather unsurprisingly, it analyses variance. Let's be a little more explicit. Your observations will exhibit some variance. If you group your observations by your factor 1, the variance within the groups defined by factor 1 will be smaller than the overall variance. Factor 1 "explains variance". However, this is not sufficient to conclude that factor 1 actually does have a relationship to your observations... because grouping by anything whatsoever will "explain" variance. The good thing is that we know how much variance will be explained under the null hypothesis that your factor does, in fact, have nothing to do with your observations. This amount of variance explained under the null is described by an $F$ distribution. Thus, the strategy in ANOVA is to estimate overall variance and within-groups variance (using sums of squares) and taking ratios of these estimated variances. This ratio is the $F$ statistic. We then compare this $F$ statistic to the critical value of the $F$ distribution in a one-sided test, yielding your $p$ value. The number of factor levels goes into one parameter of the $F$ distribution (more factor levels will explain more variance under the null hypothesis), and the number of observations and the number of levels goes into the other. This earlier question may be helpful. (Why a one-sided test? Because, as above, any grouping will explain some variance, so it only makes sense to check whether your factor explains a significantly large amount of variance.) The "Motivating Example" section of the Wikipedia entry provides some very nice illustrations of factors that explain very little, some, and a lot of the overall variance. Two-way ANOVA and interactions, as in your example, as well as ANCOVA, are then just generalizations on this theme. In each case, we investigate whether adding some explanatory variable explains a significantly large amount of variance. Once we have a significant overall $F$ test, we can examine whether certain factor levels' observations are significantly different than others in post-hoc tests. For instance, D may be different from A, B and C, but those may not be significantly different from each other. You will typically use $t$ tests for this. This earlier question may be useful, as well as this one.
What question does ANOVA answer? ANOVA stands for "Analysis of Variance". Rather unsurprisingly, it analyses variance. Let's be a little more explicit. Your observations will exhibit some variance. If you group your observations by
26,610
Variational Bayes combined with Monte Carlo
I'll confess this isn't a domain I know very well, so take this with a grain of salt. First of all, note that what you are proposing doesn't yield such a simple algorithm: in order to compute the new $q^\star_i$, we don't need to compute a single expected value (like a mean or variance), but the expected value of a whole function. This is computationally hard and will require you to approximate the true $q^\star$ by some $\tilde q$ (for example, we might find a histogram approximation) But, if you are going to be restricting the $q_i$ to a small parametric family, a better idea might be to use stochastic gradient descent to find the best parameter values (see: Variational bayesian inference with stochastic search, 2012, Paisley, Blei, Jordan). The gradient they compute is very similar to what you wrote: they sample from all the approximations they are currently not optimizing. So what you propose isn't that simple, but it's quite close to an actual method that has been proposed very recently
Variational Bayes combined with Monte Carlo
I'll confess this isn't a domain I know very well, so take this with a grain of salt. First of all, note that what you are proposing doesn't yield such a simple algorithm: in order to compute the new
Variational Bayes combined with Monte Carlo I'll confess this isn't a domain I know very well, so take this with a grain of salt. First of all, note that what you are proposing doesn't yield such a simple algorithm: in order to compute the new $q^\star_i$, we don't need to compute a single expected value (like a mean or variance), but the expected value of a whole function. This is computationally hard and will require you to approximate the true $q^\star$ by some $\tilde q$ (for example, we might find a histogram approximation) But, if you are going to be restricting the $q_i$ to a small parametric family, a better idea might be to use stochastic gradient descent to find the best parameter values (see: Variational bayesian inference with stochastic search, 2012, Paisley, Blei, Jordan). The gradient they compute is very similar to what you wrote: they sample from all the approximations they are currently not optimizing. So what you propose isn't that simple, but it's quite close to an actual method that has been proposed very recently
Variational Bayes combined with Monte Carlo I'll confess this isn't a domain I know very well, so take this with a grain of salt. First of all, note that what you are proposing doesn't yield such a simple algorithm: in order to compute the new
26,611
Gaussian Mixture and Method of Moments
The method of moments can always be used; I assume its properties for Gaussian mixture have been studied but I don’t know any references. Let’s have a look on the mixture of two Gaussian $\mathcal N(\mu_1, \sigma_1^2)$ and $\mathcal N(\mu_2, \sigma_2^2)$ in proportions $p$, $1-p$. We have five parameters to estimate so we will use the first five moments. The moment generating function of this mixture is $$p \exp\left(\mu_1 t + {1\over 2} \sigma_1^2 t^2\right) + (1-p) \exp\left(\mu_2 t + {1\over 2} \sigma_2^2 t^2\right)$$ which gives the first five moments as $$\begin{aligned} m_1 &= p \mu_1 + (1-p) \mu_2 \\ m_2 &= p (\mu_1^2 + \sigma_1^2) + (1-p)(\mu_2^2 + \sigma_2^2) \\ m_3 &= p (\mu_1^3 + 3 \mu_1 \sigma_1^2) + (1-p)(\mu_2^3 + 3 \mu_2 \sigma_2^2)\\ m_4 &= p (\mu_1^4 + 6 \mu_1^2 \sigma_1^2 + 3\sigma_1^2) + (1-p)(\mu_2^4 + 6 \mu_2^2 \sigma_2^2 + 3\sigma_2^4)\\ m_5 &= p (\mu_1^5 + 10 \mu_1^3 \sigma_1^2 + 15 \mu_1 \sigma_1^4) + (1-p)(\mu_2^5 + 10 \mu_2^3 \sigma_2^2 + 15 \mu_2 \sigma_2^4) \end{aligned}$$ The difficulty is to solve these equations in $p$, $\mu_1$ and $\mu_2$ for given moments $m_1$, $m_2$ and $m_3$. This is not easy! We need here a iterative algorithm. There is surely something clever to do here but I’ll use brute force, with a quasi-Newton to minimize the norm of the difference: f <- function(m, p, mu1, s1, mu2, s2) { mm1 <- c(mu1, mu1**2 + s1, 3*mu1*s1 + mu1**3, 3*s1**2 + 6*s1*mu1**2 + mu1**4, 15*mu1*s1^2 + 10*s1*mu1^3 + mu1^5) mm2 <- c(mu2, mu2**2 + s2, 3*mu2*s2 + mu2**3, 3*s2**2 + 6*s2*mu2**2 + mu2**4, 15*mu2*s2^2 + 10*s2*mu2^3 + mu2^5) mm <- p*mm1 + (1-p)*mm2; sum( (m-mm)**2 ) } set.seed(1) x <- c( rnorm(100, 0, 1), rnorm(200, 4, 0.5) ) m <- c(mean(x), mean(x**2), mean(x**3), mean(x**4), mean(x**5) ) r <- optim(c(0.5,0,1,4,0.25), function(x) f(m, x[1], x[2], x[3], x[4], x[5]), method="BFGS")$par Let’s see: hist(x, freq=FALSE, breaks=20) t <- seq(-3,6,length=501) lines(t, r[1]*dnorm(t, mean=r[2], sd=sqrt(r[3])) + (1-r[1])*dnorm(t, mean=r[4], sd=sqrt(r[5])), col="red") This does not look very good. I am pretty sure maximum likelihood behaves better. Moreover it is easy to implement with an EM. I don’t think this deserves more investigations.
Gaussian Mixture and Method of Moments
The method of moments can always be used; I assume its properties for Gaussian mixture have been studied but I don’t know any references. Let’s have a look on the mixture of two Gaussian $\mathcal N(\
Gaussian Mixture and Method of Moments The method of moments can always be used; I assume its properties for Gaussian mixture have been studied but I don’t know any references. Let’s have a look on the mixture of two Gaussian $\mathcal N(\mu_1, \sigma_1^2)$ and $\mathcal N(\mu_2, \sigma_2^2)$ in proportions $p$, $1-p$. We have five parameters to estimate so we will use the first five moments. The moment generating function of this mixture is $$p \exp\left(\mu_1 t + {1\over 2} \sigma_1^2 t^2\right) + (1-p) \exp\left(\mu_2 t + {1\over 2} \sigma_2^2 t^2\right)$$ which gives the first five moments as $$\begin{aligned} m_1 &= p \mu_1 + (1-p) \mu_2 \\ m_2 &= p (\mu_1^2 + \sigma_1^2) + (1-p)(\mu_2^2 + \sigma_2^2) \\ m_3 &= p (\mu_1^3 + 3 \mu_1 \sigma_1^2) + (1-p)(\mu_2^3 + 3 \mu_2 \sigma_2^2)\\ m_4 &= p (\mu_1^4 + 6 \mu_1^2 \sigma_1^2 + 3\sigma_1^2) + (1-p)(\mu_2^4 + 6 \mu_2^2 \sigma_2^2 + 3\sigma_2^4)\\ m_5 &= p (\mu_1^5 + 10 \mu_1^3 \sigma_1^2 + 15 \mu_1 \sigma_1^4) + (1-p)(\mu_2^5 + 10 \mu_2^3 \sigma_2^2 + 15 \mu_2 \sigma_2^4) \end{aligned}$$ The difficulty is to solve these equations in $p$, $\mu_1$ and $\mu_2$ for given moments $m_1$, $m_2$ and $m_3$. This is not easy! We need here a iterative algorithm. There is surely something clever to do here but I’ll use brute force, with a quasi-Newton to minimize the norm of the difference: f <- function(m, p, mu1, s1, mu2, s2) { mm1 <- c(mu1, mu1**2 + s1, 3*mu1*s1 + mu1**3, 3*s1**2 + 6*s1*mu1**2 + mu1**4, 15*mu1*s1^2 + 10*s1*mu1^3 + mu1^5) mm2 <- c(mu2, mu2**2 + s2, 3*mu2*s2 + mu2**3, 3*s2**2 + 6*s2*mu2**2 + mu2**4, 15*mu2*s2^2 + 10*s2*mu2^3 + mu2^5) mm <- p*mm1 + (1-p)*mm2; sum( (m-mm)**2 ) } set.seed(1) x <- c( rnorm(100, 0, 1), rnorm(200, 4, 0.5) ) m <- c(mean(x), mean(x**2), mean(x**3), mean(x**4), mean(x**5) ) r <- optim(c(0.5,0,1,4,0.25), function(x) f(m, x[1], x[2], x[3], x[4], x[5]), method="BFGS")$par Let’s see: hist(x, freq=FALSE, breaks=20) t <- seq(-3,6,length=501) lines(t, r[1]*dnorm(t, mean=r[2], sd=sqrt(r[3])) + (1-r[1])*dnorm(t, mean=r[4], sd=sqrt(r[5])), col="red") This does not look very good. I am pretty sure maximum likelihood behaves better. Moreover it is easy to implement with an EM. I don’t think this deserves more investigations.
Gaussian Mixture and Method of Moments The method of moments can always be used; I assume its properties for Gaussian mixture have been studied but I don’t know any references. Let’s have a look on the mixture of two Gaussian $\mathcal N(\
26,612
Logistic Regression: Interpreting Continuous Variables
1) Since it is an odds ratio it doesn't matter where you start. The odds for an 18 year old are 3 times those for a 17 year old. Or the odds for a 17 year old are 1/3 those of an 18 year old. Same thing. If you want to get the probability that a person of a particular age will be employed, you can use the formula with the parameter estimates (not the ORs). Or you can get the program you are using to do it for you. 2) Whether centering helps is a matter of opinion. I don't find centered models clearer, but some people do. 3) The odds are not exactly the same as "likely" (although many people speak as if they were) and the odds for a 17 year old would be 27 times those of a 14 year old. Finally, I'd be cautious about this model. The model assumes that the OR is the same between 14 and 15, 15 and 16 and so on. That seems unlikely to me, based on what I know about the subject.
Logistic Regression: Interpreting Continuous Variables
1) Since it is an odds ratio it doesn't matter where you start. The odds for an 18 year old are 3 times those for a 17 year old. Or the odds for a 17 year old are 1/3 those of an 18 year old. Same thi
Logistic Regression: Interpreting Continuous Variables 1) Since it is an odds ratio it doesn't matter where you start. The odds for an 18 year old are 3 times those for a 17 year old. Or the odds for a 17 year old are 1/3 those of an 18 year old. Same thing. If you want to get the probability that a person of a particular age will be employed, you can use the formula with the parameter estimates (not the ORs). Or you can get the program you are using to do it for you. 2) Whether centering helps is a matter of opinion. I don't find centered models clearer, but some people do. 3) The odds are not exactly the same as "likely" (although many people speak as if they were) and the odds for a 17 year old would be 27 times those of a 14 year old. Finally, I'd be cautious about this model. The model assumes that the OR is the same between 14 and 15, 15 and 16 and so on. That seems unlikely to me, based on what I know about the subject.
Logistic Regression: Interpreting Continuous Variables 1) Since it is an odds ratio it doesn't matter where you start. The odds for an 18 year old are 3 times those for a 17 year old. Or the odds for a 17 year old are 1/3 those of an 18 year old. Same thi
26,613
Logistic Regression: Interpreting Continuous Variables
The average odds of enrolling in the training problem for an individual is # times the odds for another individual who are one year younger/older, after holding all other variables constant. That's my take.
Logistic Regression: Interpreting Continuous Variables
The average odds of enrolling in the training problem for an individual is # times the odds for another individual who are one year younger/older, after holding all other variables constant. That's my
Logistic Regression: Interpreting Continuous Variables The average odds of enrolling in the training problem for an individual is # times the odds for another individual who are one year younger/older, after holding all other variables constant. That's my take.
Logistic Regression: Interpreting Continuous Variables The average odds of enrolling in the training problem for an individual is # times the odds for another individual who are one year younger/older, after holding all other variables constant. That's my
26,614
Error propagation SD vs SE
You should simply treat your SE as SD, and use exactly the same error propagation formulas. Indeed, standard error of the mean is nothing else than standard deviation of your estimate of the mean, so the math does not change. In your particular case when you estimate SE of $C=A-B$ and you know $\sigma^2_A$, $\sigma^2_B$, $N_A$, and $N_B$, then $$\mathrm{SE}_C=\sqrt{\frac{\sigma^2_A}{N_A}+\frac{\sigma^2_B}{N_B}}.$$ Please note that another option that could potentially sound reasonable is incorrect: $$\mathrm{SE}_C \ne \sqrt{\frac{\sigma^2_A+\sigma^2_B}{N_A+N_B}}.$$ To see why, imagine that $\sigma^2_A=\sigma^2_B=1$, but in one case you have a lot of observations and another case only one: $N_A=100, N_B=1$. The standard error of the mean of the first group is 0.1, and of the second it is 1. Now if you use the second (incorrect) formula, you would get approximately 0.14 as the joint standard error, which is far too small given that you second measurement is known $\pm 1$. The correct formula gives $\approx 1$, which makes sense.
Error propagation SD vs SE
You should simply treat your SE as SD, and use exactly the same error propagation formulas. Indeed, standard error of the mean is nothing else than standard deviation of your estimate of the mean, so
Error propagation SD vs SE You should simply treat your SE as SD, and use exactly the same error propagation formulas. Indeed, standard error of the mean is nothing else than standard deviation of your estimate of the mean, so the math does not change. In your particular case when you estimate SE of $C=A-B$ and you know $\sigma^2_A$, $\sigma^2_B$, $N_A$, and $N_B$, then $$\mathrm{SE}_C=\sqrt{\frac{\sigma^2_A}{N_A}+\frac{\sigma^2_B}{N_B}}.$$ Please note that another option that could potentially sound reasonable is incorrect: $$\mathrm{SE}_C \ne \sqrt{\frac{\sigma^2_A+\sigma^2_B}{N_A+N_B}}.$$ To see why, imagine that $\sigma^2_A=\sigma^2_B=1$, but in one case you have a lot of observations and another case only one: $N_A=100, N_B=1$. The standard error of the mean of the first group is 0.1, and of the second it is 1. Now if you use the second (incorrect) formula, you would get approximately 0.14 as the joint standard error, which is far too small given that you second measurement is known $\pm 1$. The correct formula gives $\approx 1$, which makes sense.
Error propagation SD vs SE You should simply treat your SE as SD, and use exactly the same error propagation formulas. Indeed, standard error of the mean is nothing else than standard deviation of your estimate of the mean, so
26,615
Error propagation SD vs SE
Since you know the number of measurements, my first instinct would be to just calculate the propagated SD and then calculate the SE from the propagated SD by dividing it by the square root of N, as per your equation above.
Error propagation SD vs SE
Since you know the number of measurements, my first instinct would be to just calculate the propagated SD and then calculate the SE from the propagated SD by dividing it by the square root of N, as pe
Error propagation SD vs SE Since you know the number of measurements, my first instinct would be to just calculate the propagated SD and then calculate the SE from the propagated SD by dividing it by the square root of N, as per your equation above.
Error propagation SD vs SE Since you know the number of measurements, my first instinct would be to just calculate the propagated SD and then calculate the SE from the propagated SD by dividing it by the square root of N, as pe
26,616
Estimating the parameter of a uniform distribution: improper prior?
This has generated some interesting debate, but note that it really doesn't make much difference to the question of interest. Personally I think that because $\theta$ is a scale parameter, the transformation group argument is appropriate, leading to a prior of $$\begin{array}& p(\theta|I)=\frac{\theta^{-1}}{\log\left(\frac{U}{L}\right)}\propto\theta^{-1} & L<\theta<U\end{array}$$ This distribution has the same form under rescaling of the problem (the likelihood also remains "invariant" under rescaling). The kernel of this prior, $f(y)=y^{-1}$ can be derived by solving the functional equation $af(ay)=f(y)$. The values $L,U$ depend on the problem, and really only matter if the sample size is very small (like 1 or 2). The posterior is a truncated pareto, given by: $$\begin{array}\\ p(\theta|DI)=\frac{N\theta^{-N-1}}{ (L^{*})^{-N}-U^{-N}} & L^{*}<\theta<U & \text{where} & L^{*}=max(L,X_{(N)}) \end{array}$$ Where $X_{(N)}$ is the Nth order statistic, or the maximum value of the sample. We get the posterior mean of $$E(\theta|DI)= \frac{ N((L^{*})^{1-N}-U^{1-N}) }{ (N-1)((L^{*})^{-N}-U^{-N}) }=\frac{N}{N-1}L^{*}\left(\frac{ 1-\left[\frac{L^{*}}{U}\right]^{N-1} }{ 1-\left[\frac{L^{*}}{U}\right]^{N} }\right)$$ If we set $U\to\infty$ and $L\to 0$ the we get the simpler exression $E(\theta|DI)=\frac{N}{N-1}X_{(N)}$. But now suppose we use a more general prior, given by $p(\theta|cI)\propto\theta^{-c-1}$ (note that we keep the limits $L,U$ to ensure everything is proper - no singular maths then). The posterior is then the same as above, but with $N$ replaced by $c+N$ - provided that $c+N\geq 0$. Repeating the above calculations, we the simplified posterior mean of $$E(\theta|DI)=\frac{N+c}{N+c-1}X_{(N)}$$ So the uniform prior ($c=-1$) will give an estimate of $\frac{N-1}{N-2}X_{(N)}$ provided that $N\geq 2$ (mean is infinite for $N=2$). This shows that the debate here is a bit like whether or not to use $N$ or $N-1$ as the divisor in the variance estimate. One argument against the use of the improper uniform prior in this case is that the posterior is improper when $N=1$, as it is proportional to $\theta^{-1}$. But this only matters if $N=1$ or is very small.
Estimating the parameter of a uniform distribution: improper prior?
This has generated some interesting debate, but note that it really doesn't make much difference to the question of interest. Personally I think that because $\theta$ is a scale parameter, the transf
Estimating the parameter of a uniform distribution: improper prior? This has generated some interesting debate, but note that it really doesn't make much difference to the question of interest. Personally I think that because $\theta$ is a scale parameter, the transformation group argument is appropriate, leading to a prior of $$\begin{array}& p(\theta|I)=\frac{\theta^{-1}}{\log\left(\frac{U}{L}\right)}\propto\theta^{-1} & L<\theta<U\end{array}$$ This distribution has the same form under rescaling of the problem (the likelihood also remains "invariant" under rescaling). The kernel of this prior, $f(y)=y^{-1}$ can be derived by solving the functional equation $af(ay)=f(y)$. The values $L,U$ depend on the problem, and really only matter if the sample size is very small (like 1 or 2). The posterior is a truncated pareto, given by: $$\begin{array}\\ p(\theta|DI)=\frac{N\theta^{-N-1}}{ (L^{*})^{-N}-U^{-N}} & L^{*}<\theta<U & \text{where} & L^{*}=max(L,X_{(N)}) \end{array}$$ Where $X_{(N)}$ is the Nth order statistic, or the maximum value of the sample. We get the posterior mean of $$E(\theta|DI)= \frac{ N((L^{*})^{1-N}-U^{1-N}) }{ (N-1)((L^{*})^{-N}-U^{-N}) }=\frac{N}{N-1}L^{*}\left(\frac{ 1-\left[\frac{L^{*}}{U}\right]^{N-1} }{ 1-\left[\frac{L^{*}}{U}\right]^{N} }\right)$$ If we set $U\to\infty$ and $L\to 0$ the we get the simpler exression $E(\theta|DI)=\frac{N}{N-1}X_{(N)}$. But now suppose we use a more general prior, given by $p(\theta|cI)\propto\theta^{-c-1}$ (note that we keep the limits $L,U$ to ensure everything is proper - no singular maths then). The posterior is then the same as above, but with $N$ replaced by $c+N$ - provided that $c+N\geq 0$. Repeating the above calculations, we the simplified posterior mean of $$E(\theta|DI)=\frac{N+c}{N+c-1}X_{(N)}$$ So the uniform prior ($c=-1$) will give an estimate of $\frac{N-1}{N-2}X_{(N)}$ provided that $N\geq 2$ (mean is infinite for $N=2$). This shows that the debate here is a bit like whether or not to use $N$ or $N-1$ as the divisor in the variance estimate. One argument against the use of the improper uniform prior in this case is that the posterior is improper when $N=1$, as it is proportional to $\theta^{-1}$. But this only matters if $N=1$ or is very small.
Estimating the parameter of a uniform distribution: improper prior? This has generated some interesting debate, but note that it really doesn't make much difference to the question of interest. Personally I think that because $\theta$ is a scale parameter, the transf
26,617
Estimating the parameter of a uniform distribution: improper prior?
Since the purpose here is presumably to obtain some valid and useful estimate of $\theta$, the prior distribution should be consistent with the specification of the distribution of the population from which the sample comes. This does NOT in any way mean that we "calculate" the prior using the sample itself -this would nullify the validity of the whole procedure. We do know that the population from which the sample comes is a population of i.i.d. uniform random variables each ranging in $[0,\theta]$. This is a maintained assumption and is part of the prior information that we possess (and it has nothing to do with the sample, i.e. with a specific realization of a subset of these random variables). Now assume that this population consists of $m$ random variables, (while our sample consists of $n<m$ realizations of $n$ random variables). The maintained assumption tells us that $$\max_{i=1,...,n}\{X_i\}\le \max_{j=1,...,m}\{X_j\} \le \theta$$ Denote for compactness $\max_{i=1,...,n}\{X_i\} \equiv X^*$. Then we have $\theta \ge X^*$ which can also be written $$\theta = cX^*\qquad c\ge 1$$ The density function of the $\max$ of $N$ i.i.d Uniform r.v.'s ranging in $[0,\theta]$ is $$f_{X^*}(x^*) = N\frac {(x^*)^{N-1}}{\theta^N} $$ for the support $[0,\theta]$, and zero elsewhere. Then by using $\theta = cX^*$ and applying the change-of-variable formula we obtain a prior distribution for $\theta$ that is consistent with the maintained assumption: $$f_p(\theta) = N\frac {(\frac{\theta}{c})^{N-1}}{\theta^N}\frac 1c = \frac {N}{c^N} \theta^{-1}\qquad \theta \in [x^*, \infty]$$ which may be improper if we don't specify the constant $c$ suitably. But our interest lies in having a proper posterior for $\theta$, and also, we do not want to restrict the possible values of $\theta$ (beyond the restriction implied by the maintained assumption). So we leave $c$ undetermined. Then writing $\mathbf X = \{x_1,..,x_n\}$ the posterior is $$f(\theta \mid \mathbf X)\; \propto\; \theta^{-N}\frac {N}{c^N} \theta^{-1} \Rightarrow f(\theta \mid \mathbf X) = A\frac {N}{c^N} \theta^{-(N+1)}$$ for some normalizing constant A. We want $$\int_{S_{\theta}}f(\theta \mid \mathbf X)d\theta =1 \Rightarrow \int_{x^*}^{\infty}A\frac {N}{c^N} \theta^{-(N+1)}d\theta =1$$ $$\Rightarrow A\frac {N}{c^N}\frac {1}{-N}\theta^{-N}\Big |_{x^*}^{\infty} = 1 \Rightarrow A = (cx^*)^N$$ Inserting into the posterior $$f(\theta \mid \mathbf X) = (cx^*)^N\frac {N}{c^N} \theta^{-(N+1)} = N(x^*)^N\theta^{-(N+1)} $$ Note that the undetermined constant $c$ of the prior distribution has conveniently cancelled out. The posterior summarizes all the information that the specific sample can give us regarding the value of $\theta$. If we want to obtain a specific value for $\theta$ we can easily calculate the expected value of the posterior, $$E(\theta\mid \mathbf X) = \int_{x^*}^{\infty}\theta N(x^*)^N\theta^{-(N+1)}d\theta = -\frac{N}{N-1}(x^*)^N\theta^{-N+1}\Big |_{x^*}^{\infty} = \frac{N}{N-1}x^*$$ Is there any intuition in this result? Well, as the number of $X$'s increases, the more likely is that the maximum realization among them will be closer and closer to their upper bound, $\theta$ - which is exactly what the posterior mean value of $\theta$ reflects: if, say, $N=2 \Rightarrow E(\theta\mid \mathbf X) = 2x^*$, but if $N=10 \Rightarrow E(\theta\mid \mathbf X) = \frac{10}{9}x^*$. This shows that our tactic regarding the selection of the prior was reasonable and consistent with the problem at hand, but not necessarily "optimal" in some sense.
Estimating the parameter of a uniform distribution: improper prior?
Since the purpose here is presumably to obtain some valid and useful estimate of $\theta$, the prior distribution should be consistent with the specification of the distribution of the population from
Estimating the parameter of a uniform distribution: improper prior? Since the purpose here is presumably to obtain some valid and useful estimate of $\theta$, the prior distribution should be consistent with the specification of the distribution of the population from which the sample comes. This does NOT in any way mean that we "calculate" the prior using the sample itself -this would nullify the validity of the whole procedure. We do know that the population from which the sample comes is a population of i.i.d. uniform random variables each ranging in $[0,\theta]$. This is a maintained assumption and is part of the prior information that we possess (and it has nothing to do with the sample, i.e. with a specific realization of a subset of these random variables). Now assume that this population consists of $m$ random variables, (while our sample consists of $n<m$ realizations of $n$ random variables). The maintained assumption tells us that $$\max_{i=1,...,n}\{X_i\}\le \max_{j=1,...,m}\{X_j\} \le \theta$$ Denote for compactness $\max_{i=1,...,n}\{X_i\} \equiv X^*$. Then we have $\theta \ge X^*$ which can also be written $$\theta = cX^*\qquad c\ge 1$$ The density function of the $\max$ of $N$ i.i.d Uniform r.v.'s ranging in $[0,\theta]$ is $$f_{X^*}(x^*) = N\frac {(x^*)^{N-1}}{\theta^N} $$ for the support $[0,\theta]$, and zero elsewhere. Then by using $\theta = cX^*$ and applying the change-of-variable formula we obtain a prior distribution for $\theta$ that is consistent with the maintained assumption: $$f_p(\theta) = N\frac {(\frac{\theta}{c})^{N-1}}{\theta^N}\frac 1c = \frac {N}{c^N} \theta^{-1}\qquad \theta \in [x^*, \infty]$$ which may be improper if we don't specify the constant $c$ suitably. But our interest lies in having a proper posterior for $\theta$, and also, we do not want to restrict the possible values of $\theta$ (beyond the restriction implied by the maintained assumption). So we leave $c$ undetermined. Then writing $\mathbf X = \{x_1,..,x_n\}$ the posterior is $$f(\theta \mid \mathbf X)\; \propto\; \theta^{-N}\frac {N}{c^N} \theta^{-1} \Rightarrow f(\theta \mid \mathbf X) = A\frac {N}{c^N} \theta^{-(N+1)}$$ for some normalizing constant A. We want $$\int_{S_{\theta}}f(\theta \mid \mathbf X)d\theta =1 \Rightarrow \int_{x^*}^{\infty}A\frac {N}{c^N} \theta^{-(N+1)}d\theta =1$$ $$\Rightarrow A\frac {N}{c^N}\frac {1}{-N}\theta^{-N}\Big |_{x^*}^{\infty} = 1 \Rightarrow A = (cx^*)^N$$ Inserting into the posterior $$f(\theta \mid \mathbf X) = (cx^*)^N\frac {N}{c^N} \theta^{-(N+1)} = N(x^*)^N\theta^{-(N+1)} $$ Note that the undetermined constant $c$ of the prior distribution has conveniently cancelled out. The posterior summarizes all the information that the specific sample can give us regarding the value of $\theta$. If we want to obtain a specific value for $\theta$ we can easily calculate the expected value of the posterior, $$E(\theta\mid \mathbf X) = \int_{x^*}^{\infty}\theta N(x^*)^N\theta^{-(N+1)}d\theta = -\frac{N}{N-1}(x^*)^N\theta^{-N+1}\Big |_{x^*}^{\infty} = \frac{N}{N-1}x^*$$ Is there any intuition in this result? Well, as the number of $X$'s increases, the more likely is that the maximum realization among them will be closer and closer to their upper bound, $\theta$ - which is exactly what the posterior mean value of $\theta$ reflects: if, say, $N=2 \Rightarrow E(\theta\mid \mathbf X) = 2x^*$, but if $N=10 \Rightarrow E(\theta\mid \mathbf X) = \frac{10}{9}x^*$. This shows that our tactic regarding the selection of the prior was reasonable and consistent with the problem at hand, but not necessarily "optimal" in some sense.
Estimating the parameter of a uniform distribution: improper prior? Since the purpose here is presumably to obtain some valid and useful estimate of $\theta$, the prior distribution should be consistent with the specification of the distribution of the population from
26,618
Estimating the parameter of a uniform distribution: improper prior?
Uniform Prior Distribution Theorem (interval case): "If the totality of Your information about $\theta$ external to the data $D$ is captured by the single proposition $$B=\{\{\text{Possible values for } \theta\}=\{\text{the interval } (a,b)\},a<b\}$$ then Your only possible logically-internally-consistent prior specification is $$f(\theta)=\text{Uniform}(a,b)$$ Thus, you prior specification should correspond to the Jeffrey's prior if you truly believe in the above theorem." Not part of the Uniform Prior Distribution Theorem: Alternatively you could specify your prior distribution $f(\theta)$ as a Pareto distribution, which is the conjugate distribution for the uniform, knowing that you posterior distribution will have to be another uniform distribution by conjugacy. However, if you use the Pareto distribution, then you will need to specify parameters of the Pareto distribution in some sort of way.
Estimating the parameter of a uniform distribution: improper prior?
Uniform Prior Distribution Theorem (interval case): "If the totality of Your information about $\theta$ external to the data $D$ is captured by the single proposition $$B=\{\{\text{Possible values for
Estimating the parameter of a uniform distribution: improper prior? Uniform Prior Distribution Theorem (interval case): "If the totality of Your information about $\theta$ external to the data $D$ is captured by the single proposition $$B=\{\{\text{Possible values for } \theta\}=\{\text{the interval } (a,b)\},a<b\}$$ then Your only possible logically-internally-consistent prior specification is $$f(\theta)=\text{Uniform}(a,b)$$ Thus, you prior specification should correspond to the Jeffrey's prior if you truly believe in the above theorem." Not part of the Uniform Prior Distribution Theorem: Alternatively you could specify your prior distribution $f(\theta)$ as a Pareto distribution, which is the conjugate distribution for the uniform, knowing that you posterior distribution will have to be another uniform distribution by conjugacy. However, if you use the Pareto distribution, then you will need to specify parameters of the Pareto distribution in some sort of way.
Estimating the parameter of a uniform distribution: improper prior? Uniform Prior Distribution Theorem (interval case): "If the totality of Your information about $\theta$ external to the data $D$ is captured by the single proposition $$B=\{\{\text{Possible values for
26,619
Example of maximum a posteriori estimation
1st Example A typical case is tagging in the context of natural language processing. See here for a detailed explanation. The idea is basically to be able to determine the lexical category of a word in a sentence (is it a noun, an adjective,...). The basic idea is that you have a model of your language consisting on a hidden markov model (HMM). In this model, the hidden states correspond to the lexical categories, and the observed states to the actual words. The respective graphical model has the form, where $\mathbf{y} = (y1,...,y_{N})$ is the sequence of words in the sentence, and $\mathbf{x} = (x1,...,x_{N})$ is the sequence of tags. Once trained, the goal is to find the correct sequence of lexical categories that correspond to a given input sentence. This is formulated as finding the sequence of tags which are most compatible/most likely to have been generated by the language model, i.e. $$f(y) = \mathbf{argmax}_{\mathbf{x} \in Y}p(\mathbf{x})p(\mathbf{y}|\mathbf{x})$$ 2nd Example Actually, a better example would be regression. Not only because it is easier to understand, but also because makes the differences between maximum likelihood (ML) and maximum a posteriori (MAP) clear. Basically, the problem is that of fitting some function given by the samples $t$ with a linear combination of a set of basis functions, $$y(\mathbf{x};\mathbf{w}) = \sum_{i}w_{i}\phi_{i}(\mathbf{x})$$ where $\phi(\mathbf{x})$ are the basis functions, and $\mathbf{w}$ are the weights. It is usually assumed that the samples are corrupted by Gaussian noise. Hence, if we assume that the target function can be exactly written as such a linear combination, then we have, $$t = y(\mathbf{x};\mathbf{w}) + \epsilon$$ so we have $p(t|\mathbf{w}) = \mathcal{N}(t|y(\mathbf{x};\mathbf{w}))$ The ML solution of this problem is equivalent to minimizing, $$E(\mathbf{w}) = \frac{1}{2}\sum_{n}\left(t_{n} - \mathbf{w}^{T}\phi(\mathbf{x}_{n}) \right)^{2}$$ which yields the well-known least square error solution. Now, ML is sentitive to noise, and under certain circumstances not stable. MAP allows you to pick up better solutions by putting constraints on the weights. For example, a typical case is ridge regression, where you demand the weights to have a norm as small as possible, $$E(\mathbf{w}) = \frac{1}{2}\sum_{n}\left(t_{n} - \mathbf{w}^{T}\phi(\mathbf{x}_{n}) \right)^{2} + \lambda \sum_{k}w_{k}^{2}$$ which is equivalent to setting a Gaussian prior on the weights $\mathcal{N}(\mathbf{w}|\mathbf{0},\lambda^{-1}\mathbf{I})$. In all, the estimated weights are $$\mathbf{w} = \mathbf{argmin}_{w}p(\mathbf{w};\lambda)p(t|\mathbf{w};\phi)$$ Notice that in MAP the weights are not parameters as in ML, but random variables. Nevertheless, both ML and MAP are point estimators (they return an optimal set of weights, rather than a distribution of optimal weights).
Example of maximum a posteriori estimation
1st Example A typical case is tagging in the context of natural language processing. See here for a detailed explanation. The idea is basically to be able to determine the lexical category of a word i
Example of maximum a posteriori estimation 1st Example A typical case is tagging in the context of natural language processing. See here for a detailed explanation. The idea is basically to be able to determine the lexical category of a word in a sentence (is it a noun, an adjective,...). The basic idea is that you have a model of your language consisting on a hidden markov model (HMM). In this model, the hidden states correspond to the lexical categories, and the observed states to the actual words. The respective graphical model has the form, where $\mathbf{y} = (y1,...,y_{N})$ is the sequence of words in the sentence, and $\mathbf{x} = (x1,...,x_{N})$ is the sequence of tags. Once trained, the goal is to find the correct sequence of lexical categories that correspond to a given input sentence. This is formulated as finding the sequence of tags which are most compatible/most likely to have been generated by the language model, i.e. $$f(y) = \mathbf{argmax}_{\mathbf{x} \in Y}p(\mathbf{x})p(\mathbf{y}|\mathbf{x})$$ 2nd Example Actually, a better example would be regression. Not only because it is easier to understand, but also because makes the differences between maximum likelihood (ML) and maximum a posteriori (MAP) clear. Basically, the problem is that of fitting some function given by the samples $t$ with a linear combination of a set of basis functions, $$y(\mathbf{x};\mathbf{w}) = \sum_{i}w_{i}\phi_{i}(\mathbf{x})$$ where $\phi(\mathbf{x})$ are the basis functions, and $\mathbf{w}$ are the weights. It is usually assumed that the samples are corrupted by Gaussian noise. Hence, if we assume that the target function can be exactly written as such a linear combination, then we have, $$t = y(\mathbf{x};\mathbf{w}) + \epsilon$$ so we have $p(t|\mathbf{w}) = \mathcal{N}(t|y(\mathbf{x};\mathbf{w}))$ The ML solution of this problem is equivalent to minimizing, $$E(\mathbf{w}) = \frac{1}{2}\sum_{n}\left(t_{n} - \mathbf{w}^{T}\phi(\mathbf{x}_{n}) \right)^{2}$$ which yields the well-known least square error solution. Now, ML is sentitive to noise, and under certain circumstances not stable. MAP allows you to pick up better solutions by putting constraints on the weights. For example, a typical case is ridge regression, where you demand the weights to have a norm as small as possible, $$E(\mathbf{w}) = \frac{1}{2}\sum_{n}\left(t_{n} - \mathbf{w}^{T}\phi(\mathbf{x}_{n}) \right)^{2} + \lambda \sum_{k}w_{k}^{2}$$ which is equivalent to setting a Gaussian prior on the weights $\mathcal{N}(\mathbf{w}|\mathbf{0},\lambda^{-1}\mathbf{I})$. In all, the estimated weights are $$\mathbf{w} = \mathbf{argmin}_{w}p(\mathbf{w};\lambda)p(t|\mathbf{w};\phi)$$ Notice that in MAP the weights are not parameters as in ML, but random variables. Nevertheless, both ML and MAP are point estimators (they return an optimal set of weights, rather than a distribution of optimal weights).
Example of maximum a posteriori estimation 1st Example A typical case is tagging in the context of natural language processing. See here for a detailed explanation. The idea is basically to be able to determine the lexical category of a word i
26,620
Highly unbalanced test data set and balanced training data in classification
This is called Dataset Shift setting. This pdf [1] should help you understand several of the underlying issues involved. For the moment however, you can use least squares importance fitting to obtain importance estimates for your training data using your test set (you don't need the test set labels, just the feature vectors) [2]. Once you gain the importance estimates, you can use them as instance weights in libSVM [3]. That should enable you to get a better classifier. [1] http://www.acad.bg/ebook/ml/The.MIT.Press.Dataset.Shift.in.Machine.Learning.Feb.2009.eBook-DDU.pdf [2] http://www.ms.k.u-tokyo.ac.jp/software.html#uLSIF [3] http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/#weights_for_data_instances
Highly unbalanced test data set and balanced training data in classification
This is called Dataset Shift setting. This pdf [1] should help you understand several of the underlying issues involved. For the moment however, you can use least squares importance fitting to obtain
Highly unbalanced test data set and balanced training data in classification This is called Dataset Shift setting. This pdf [1] should help you understand several of the underlying issues involved. For the moment however, you can use least squares importance fitting to obtain importance estimates for your training data using your test set (you don't need the test set labels, just the feature vectors) [2]. Once you gain the importance estimates, you can use them as instance weights in libSVM [3]. That should enable you to get a better classifier. [1] http://www.acad.bg/ebook/ml/The.MIT.Press.Dataset.Shift.in.Machine.Learning.Feb.2009.eBook-DDU.pdf [2] http://www.ms.k.u-tokyo.ac.jp/software.html#uLSIF [3] http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/#weights_for_data_instances
Highly unbalanced test data set and balanced training data in classification This is called Dataset Shift setting. This pdf [1] should help you understand several of the underlying issues involved. For the moment however, you can use least squares importance fitting to obtain
26,621
Highly unbalanced test data set and balanced training data in classification
Do you think the `real world' looks more like the training set or the test set? If it looks more like the training set, the you can randomly sample 50 instances from your negative test set to get a more unbiased estimate of precision. But I agree with Peter Flom: In general, your test and train sets should both look similar.
Highly unbalanced test data set and balanced training data in classification
Do you think the `real world' looks more like the training set or the test set? If it looks more like the training set, the you can randomly sample 50 instances from your negative test set to get a mo
Highly unbalanced test data set and balanced training data in classification Do you think the `real world' looks more like the training set or the test set? If it looks more like the training set, the you can randomly sample 50 instances from your negative test set to get a more unbiased estimate of precision. But I agree with Peter Flom: In general, your test and train sets should both look similar.
Highly unbalanced test data set and balanced training data in classification Do you think the `real world' looks more like the training set or the test set? If it looks more like the training set, the you can randomly sample 50 instances from your negative test set to get a mo
26,622
How does LASSO select among collinear predictors?
LASSO differs from best-subset selection in terms of penalization and path dependence. In best-subset selection, presumably CV was used to identify that 2 predictors gave the best performance. During CV, full-magnitude regression coefficients without penalization would have been used for evaluating how many variables to include. Once the decision was made to use 2 predictors, then all combinations of 2 predictors would be compared on the full data set, in parallel, to find the 2 for the final model. Those 2 final predictors would be given their full-magnitude regression coefficients, without penalization, as if they had been the only choices all along. You can think of LASSO as starting with a large penalty on the sum of the magnitudes of the regression coefficients, with the penalty gradually relaxed. The result is that variables enter one at a time, with a decision made at each point during the relaxation whether it's more valuable to increase the coefficients of the variables already in the model, or to add another variable. But when you get, say, to a 2-variable model, the regression coefficients allowed by LASSO will be lower in magnitude than those same variables would have in the standard non-penalized regressions used to compare 2-variable and 3-variable models in best-subset selection. This can be thought of as making it easier for new variables to enter in LASSO than in best-subset selection. Heuristically, LASSO trades off potentially lower-than-actual regression coefficients against the uncertainty in how many variables should be included. This would tend to include more variables in a LASSO model, and potentially worse performance for LASSO if you knew for sure that only 2 variables needed to be included. But if you already knew how many predictor variables should be included in the correct model, you probably wouldn't be using LASSO. Nothing so far has depended on collinearity, which leads different types of arbitrariness in variable selection in best-subset versus LASSO. In this example, best-subset examined all possible combinations of 2 predictors and chose the best among those combinations. So the best 2 for that particular data sample win. LASSO, with its path dependence in adding one variable at a time, means that an early choice of one variable may influence when other variables correlated to it enter later in the relaxation process. It's also possible for a variable to enter early and then for its LASSO coefficient to drop as other correlated variables enter. In practice, the choice among correlated predictors in final models with either method is highly sample dependent, as can be checked by repeating these model-building processes on bootstrap samples of the same data. If there aren't too many predictors, and your primary interest is in prediction on new data sets, ridge regression, which tends to keep all predictors, may be a better choice.
How does LASSO select among collinear predictors?
LASSO differs from best-subset selection in terms of penalization and path dependence. In best-subset selection, presumably CV was used to identify that 2 predictors gave the best performance. During
How does LASSO select among collinear predictors? LASSO differs from best-subset selection in terms of penalization and path dependence. In best-subset selection, presumably CV was used to identify that 2 predictors gave the best performance. During CV, full-magnitude regression coefficients without penalization would have been used for evaluating how many variables to include. Once the decision was made to use 2 predictors, then all combinations of 2 predictors would be compared on the full data set, in parallel, to find the 2 for the final model. Those 2 final predictors would be given their full-magnitude regression coefficients, without penalization, as if they had been the only choices all along. You can think of LASSO as starting with a large penalty on the sum of the magnitudes of the regression coefficients, with the penalty gradually relaxed. The result is that variables enter one at a time, with a decision made at each point during the relaxation whether it's more valuable to increase the coefficients of the variables already in the model, or to add another variable. But when you get, say, to a 2-variable model, the regression coefficients allowed by LASSO will be lower in magnitude than those same variables would have in the standard non-penalized regressions used to compare 2-variable and 3-variable models in best-subset selection. This can be thought of as making it easier for new variables to enter in LASSO than in best-subset selection. Heuristically, LASSO trades off potentially lower-than-actual regression coefficients against the uncertainty in how many variables should be included. This would tend to include more variables in a LASSO model, and potentially worse performance for LASSO if you knew for sure that only 2 variables needed to be included. But if you already knew how many predictor variables should be included in the correct model, you probably wouldn't be using LASSO. Nothing so far has depended on collinearity, which leads different types of arbitrariness in variable selection in best-subset versus LASSO. In this example, best-subset examined all possible combinations of 2 predictors and chose the best among those combinations. So the best 2 for that particular data sample win. LASSO, with its path dependence in adding one variable at a time, means that an early choice of one variable may influence when other variables correlated to it enter later in the relaxation process. It's also possible for a variable to enter early and then for its LASSO coefficient to drop as other correlated variables enter. In practice, the choice among correlated predictors in final models with either method is highly sample dependent, as can be checked by repeating these model-building processes on bootstrap samples of the same data. If there aren't too many predictors, and your primary interest is in prediction on new data sets, ridge regression, which tends to keep all predictors, may be a better choice.
How does LASSO select among collinear predictors? LASSO differs from best-subset selection in terms of penalization and path dependence. In best-subset selection, presumably CV was used to identify that 2 predictors gave the best performance. During
26,623
The weighted sum of two independent Poisson random variables
Provided not a whole lot of probability is concentrated on any single value in this linear combination, it looks like a Cornish-Fisher expansion may provide good approximations to the (inverse) CDF. Recall that this expansion adjusts the inverse CDF of the standard Normal distribution using the first few cumulants of $S_2$. Its skewness $\beta_1$ is $$\frac{a_1^3 \lambda_1 + a_2^3 \lambda_2}{\left(\sqrt{a_1^2 \lambda_1 + a_2^2 \lambda_2}\right)^3}$$ and its kurtosis $\beta_2$ is $$\frac{a_1^4 \lambda_1 + 3a_1^4 \lambda_1^2 + a_2^4 \lambda_2 + 6 a_1^2 a_2^2 \lambda_1 \lambda_2 + 3 a_2^4 \lambda_2^2}{\left(a_1^2 \lambda_1 + a_2^2 \lambda_2\right)^2}.$$ To find the $\alpha$ percentile of the standardized version of $S_2$, compute $$w_\alpha = z +\frac{1}{6} \beta _1 \left(z^2-1\right) +\frac{1}{24} \left(\beta _2-3\right) \left(z^2-3\right) z-\frac{1}{36} \beta _1^2 z \left(2 z^2-5 z\right)-\frac{1}{24} \left(\beta _2-3\right) \beta _1 \left(z^4-5 z^2+2\right)$$ where $z$ is the $\alpha$ percentile of the standard Normal distribution. The percentile of $S_2$ thereby is $$a_1 \lambda_1 + a_2 \lambda_2 + w_\alpha \sqrt{a_1^2 \lambda_1 + a_2^2 \lambda_2}.$$ Numerical experiments suggest this is a good approximation once both $\lambda_1$ and $\lambda_2$ exceed $5$ or so. For example, consider the case $\lambda_1 = 5,$ $\lambda_2=5\pi/2,$ $a_1=\pi,$ and $a_2=-2$ (arranged to give a zero mean for convenience): The blue shaded portion is the numerically computed CDF of $S_2$ while the solid red underneath is the Cornish-Fisher approximation. The approximation is essentially a smooth of the actual distribution, showing only small systematic departures.
The weighted sum of two independent Poisson random variables
Provided not a whole lot of probability is concentrated on any single value in this linear combination, it looks like a Cornish-Fisher expansion may provide good approximations to the (inverse) CDF. R
The weighted sum of two independent Poisson random variables Provided not a whole lot of probability is concentrated on any single value in this linear combination, it looks like a Cornish-Fisher expansion may provide good approximations to the (inverse) CDF. Recall that this expansion adjusts the inverse CDF of the standard Normal distribution using the first few cumulants of $S_2$. Its skewness $\beta_1$ is $$\frac{a_1^3 \lambda_1 + a_2^3 \lambda_2}{\left(\sqrt{a_1^2 \lambda_1 + a_2^2 \lambda_2}\right)^3}$$ and its kurtosis $\beta_2$ is $$\frac{a_1^4 \lambda_1 + 3a_1^4 \lambda_1^2 + a_2^4 \lambda_2 + 6 a_1^2 a_2^2 \lambda_1 \lambda_2 + 3 a_2^4 \lambda_2^2}{\left(a_1^2 \lambda_1 + a_2^2 \lambda_2\right)^2}.$$ To find the $\alpha$ percentile of the standardized version of $S_2$, compute $$w_\alpha = z +\frac{1}{6} \beta _1 \left(z^2-1\right) +\frac{1}{24} \left(\beta _2-3\right) \left(z^2-3\right) z-\frac{1}{36} \beta _1^2 z \left(2 z^2-5 z\right)-\frac{1}{24} \left(\beta _2-3\right) \beta _1 \left(z^4-5 z^2+2\right)$$ where $z$ is the $\alpha$ percentile of the standard Normal distribution. The percentile of $S_2$ thereby is $$a_1 \lambda_1 + a_2 \lambda_2 + w_\alpha \sqrt{a_1^2 \lambda_1 + a_2^2 \lambda_2}.$$ Numerical experiments suggest this is a good approximation once both $\lambda_1$ and $\lambda_2$ exceed $5$ or so. For example, consider the case $\lambda_1 = 5,$ $\lambda_2=5\pi/2,$ $a_1=\pi,$ and $a_2=-2$ (arranged to give a zero mean for convenience): The blue shaded portion is the numerically computed CDF of $S_2$ while the solid red underneath is the Cornish-Fisher approximation. The approximation is essentially a smooth of the actual distribution, showing only small systematic departures.
The weighted sum of two independent Poisson random variables Provided not a whole lot of probability is concentrated on any single value in this linear combination, it looks like a Cornish-Fisher expansion may provide good approximations to the (inverse) CDF. R
26,624
The weighted sum of two independent Poisson random variables
Use the convolution: Let $f_{X_1}(x_1)= \dfrac{\lambda^{x_1}e^{-\lambda}}{x_1!} $ for $x_1 \geq 0$, $f_{X_1}(x_1)= 0$ otherwise, and $f_{X_2}(x_2)=\dfrac{\lambda^{x_2}e^{-\lambda}}{x_2!} $ for $x_2 \geq 0$, $f_{X_2}(x_2)= 0$ otherwise. Let $Z=X_1+X_2\rightarrow X_1=Z-X_2$, so $$f_Z(z)=\int\limits_{-\infty}^{\infty}\int\limits_{-\infty}^{\infty}f_{x_1,x_2}(z-x_2,x_2)dx_1dx_2$$ The former is known as convolution. If $X_1$ and $X_2$ are independent, $$f_Z(z)=\int\limits_{-\infty}^{\infty}\int\limits_{-\infty}^{\infty}f_{X_1}(z-x_2)f_{X_2}(x_2)dx_1dx_2$$ This way you can obtain the distribution of the sum of two continuous random variables. For the discrete poisson distribution $$f_Z(z)=\sum\limits_{x_2=0}^{z} \dfrac{\lambda^{z-x_{2}}_1e^{-\lambda_1}}{(z-x_2)!}\dfrac{\lambda^{x_2}_2e^{-\lambda_2}}{x_2!}$$ $$= e^{-(\lambda_1+\lambda_2)}\dfrac{(\lambda_1+\lambda_2)^z}{z!}$$ Which is also a Poisson distribution with parameter $\lambda_1+\lambda_2$
The weighted sum of two independent Poisson random variables
Use the convolution: Let $f_{X_1}(x_1)= \dfrac{\lambda^{x_1}e^{-\lambda}}{x_1!} $ for $x_1 \geq 0$, $f_{X_1}(x_1)= 0$ otherwise, and $f_{X_2}(x_2)=\dfrac{\lambda^{x_2}e^{-\lambda}}{x_2!} $ for $x_2 \
The weighted sum of two independent Poisson random variables Use the convolution: Let $f_{X_1}(x_1)= \dfrac{\lambda^{x_1}e^{-\lambda}}{x_1!} $ for $x_1 \geq 0$, $f_{X_1}(x_1)= 0$ otherwise, and $f_{X_2}(x_2)=\dfrac{\lambda^{x_2}e^{-\lambda}}{x_2!} $ for $x_2 \geq 0$, $f_{X_2}(x_2)= 0$ otherwise. Let $Z=X_1+X_2\rightarrow X_1=Z-X_2$, so $$f_Z(z)=\int\limits_{-\infty}^{\infty}\int\limits_{-\infty}^{\infty}f_{x_1,x_2}(z-x_2,x_2)dx_1dx_2$$ The former is known as convolution. If $X_1$ and $X_2$ are independent, $$f_Z(z)=\int\limits_{-\infty}^{\infty}\int\limits_{-\infty}^{\infty}f_{X_1}(z-x_2)f_{X_2}(x_2)dx_1dx_2$$ This way you can obtain the distribution of the sum of two continuous random variables. For the discrete poisson distribution $$f_Z(z)=\sum\limits_{x_2=0}^{z} \dfrac{\lambda^{z-x_{2}}_1e^{-\lambda_1}}{(z-x_2)!}\dfrac{\lambda^{x_2}_2e^{-\lambda_2}}{x_2!}$$ $$= e^{-(\lambda_1+\lambda_2)}\dfrac{(\lambda_1+\lambda_2)^z}{z!}$$ Which is also a Poisson distribution with parameter $\lambda_1+\lambda_2$
The weighted sum of two independent Poisson random variables Use the convolution: Let $f_{X_1}(x_1)= \dfrac{\lambda^{x_1}e^{-\lambda}}{x_1!} $ for $x_1 \geq 0$, $f_{X_1}(x_1)= 0$ otherwise, and $f_{X_2}(x_2)=\dfrac{\lambda^{x_2}e^{-\lambda}}{x_2!} $ for $x_2 \
26,625
The weighted sum of two independent Poisson random variables
I think the solution is the concept of a compound Poisson distribution. The idea is a random sum $$ S = \sum_{i=1}^N X_i $$ with $N$ Poisson distributed and $X_i$ and $iid$ sequence independent of $N$. When we restric to the case that $X_i=k$ always, then we can describe $k N$ for a real number $k$ and a Poisson distributed $N$. You get the pgf by $$ E[s^{k N}] = E[(s^{k})^N] = G_N(s^{k}) = \exp(\lambda(s^k-1)) $$ For the sum $Z = k_1 N_1 + k_2 N_2$ you get $$ G_Z(s) = \exp(\lambda_1(s^{k_1}-1) + \lambda_2(s^{k_2}-1)). $$ define $\lambda = \lambda_1 + \lambda_2$ then $$ G_Z(s) = \exp(\lambda ( \frac{\lambda_1}{\lambda}(s^{k_1}-1)+ \frac{\lambda_2}{\lambda}(s^{k_1}-1)) = \exp(\lambda (\frac{\lambda_1}{\lambda}s^{k_1}+ \frac{\lambda_2}{\lambda}s^{k_1}-1)). $$ The final interpretation is that the resulting rv is a compound Poisson distribution with intensity $\lambda = \lambda_1 + \lambda_2$ and distribution of the $X_i$ that take the value $k_1$ with probability $\lambda_1/\lambda$ and the value $k_2$ with $\lambda_2/\lambda$. Having proved that the distributions is compound Poisson we can either use Panjer recursion in the case that $k_1$ and $k_2$ are positive integers. Or we can easily derive the Fourier transform from the form of the pgf and get the distribution back by the inverse. Note that there is a point mass at $0$. Edit after a discussion: I think the best you can do is MC. You could use the derivation that this is a compound Poisson distr. sample N from $Pois(\lambda)$ (very efficient) then for each $i=1,\ldots,N$ sample whether it is from $X_1$ or $X_2$ where the probability of the first is $\lambda_1/\lambda$. Do this by sampling a Bernoulli rv with probability of success $\lambda_1/\lambda$. If it is $1$ then add $k_1$ to the sampled sum else add $k_2$. You will have a sample of say 100 000 in seconds. Alternatively you can sample the two summands in your inital representation separately ... this will be as quick. Everything else (FFT) is complicated if the constant factor k1 and k2 are totally general.
The weighted sum of two independent Poisson random variables
I think the solution is the concept of a compound Poisson distribution. The idea is a random sum $$ S = \sum_{i=1}^N X_i $$ with $N$ Poisson distributed and $X_i$ and $iid$ sequence independent of $
The weighted sum of two independent Poisson random variables I think the solution is the concept of a compound Poisson distribution. The idea is a random sum $$ S = \sum_{i=1}^N X_i $$ with $N$ Poisson distributed and $X_i$ and $iid$ sequence independent of $N$. When we restric to the case that $X_i=k$ always, then we can describe $k N$ for a real number $k$ and a Poisson distributed $N$. You get the pgf by $$ E[s^{k N}] = E[(s^{k})^N] = G_N(s^{k}) = \exp(\lambda(s^k-1)) $$ For the sum $Z = k_1 N_1 + k_2 N_2$ you get $$ G_Z(s) = \exp(\lambda_1(s^{k_1}-1) + \lambda_2(s^{k_2}-1)). $$ define $\lambda = \lambda_1 + \lambda_2$ then $$ G_Z(s) = \exp(\lambda ( \frac{\lambda_1}{\lambda}(s^{k_1}-1)+ \frac{\lambda_2}{\lambda}(s^{k_1}-1)) = \exp(\lambda (\frac{\lambda_1}{\lambda}s^{k_1}+ \frac{\lambda_2}{\lambda}s^{k_1}-1)). $$ The final interpretation is that the resulting rv is a compound Poisson distribution with intensity $\lambda = \lambda_1 + \lambda_2$ and distribution of the $X_i$ that take the value $k_1$ with probability $\lambda_1/\lambda$ and the value $k_2$ with $\lambda_2/\lambda$. Having proved that the distributions is compound Poisson we can either use Panjer recursion in the case that $k_1$ and $k_2$ are positive integers. Or we can easily derive the Fourier transform from the form of the pgf and get the distribution back by the inverse. Note that there is a point mass at $0$. Edit after a discussion: I think the best you can do is MC. You could use the derivation that this is a compound Poisson distr. sample N from $Pois(\lambda)$ (very efficient) then for each $i=1,\ldots,N$ sample whether it is from $X_1$ or $X_2$ where the probability of the first is $\lambda_1/\lambda$. Do this by sampling a Bernoulli rv with probability of success $\lambda_1/\lambda$. If it is $1$ then add $k_1$ to the sampled sum else add $k_2$. You will have a sample of say 100 000 in seconds. Alternatively you can sample the two summands in your inital representation separately ... this will be as quick. Everything else (FFT) is complicated if the constant factor k1 and k2 are totally general.
The weighted sum of two independent Poisson random variables I think the solution is the concept of a compound Poisson distribution. The idea is a random sum $$ S = \sum_{i=1}^N X_i $$ with $N$ Poisson distributed and $X_i$ and $iid$ sequence independent of $
26,626
Machine learning algorithm for ranking
Many classification algorithms already do exactly what you're looking for, but often present their answers to users in the form of a binary (or n-way) judgement. For example, SVMLight is an implementation of the support vector machine classification algorithm; people commonly use this to make binary judgments on some data set. What happens under the hood, however, is the algorithm is assigning signed confidence judgments to the data. These are bound between -1.0 and 1.0 and are what you should use for ranking your data!
Machine learning algorithm for ranking
Many classification algorithms already do exactly what you're looking for, but often present their answers to users in the form of a binary (or n-way) judgement. For example, SVMLight is an implementa
Machine learning algorithm for ranking Many classification algorithms already do exactly what you're looking for, but often present their answers to users in the form of a binary (or n-way) judgement. For example, SVMLight is an implementation of the support vector machine classification algorithm; people commonly use this to make binary judgments on some data set. What happens under the hood, however, is the algorithm is assigning signed confidence judgments to the data. These are bound between -1.0 and 1.0 and are what you should use for ranking your data!
Machine learning algorithm for ranking Many classification algorithms already do exactly what you're looking for, but often present their answers to users in the form of a binary (or n-way) judgement. For example, SVMLight is an implementa
26,627
Machine learning algorithm for ranking
It seems that you can use regression analysis. Also, probably you need to assign scores (real numbers) to the elements in your training set, if you don't have them. Although you can just use rank as your target value, it will make you get a poor model if you just have a small set of training samples.
Machine learning algorithm for ranking
It seems that you can use regression analysis. Also, probably you need to assign scores (real numbers) to the elements in your training set, if you don't have them. Although you can just use rank as y
Machine learning algorithm for ranking It seems that you can use regression analysis. Also, probably you need to assign scores (real numbers) to the elements in your training set, if you don't have them. Although you can just use rank as your target value, it will make you get a poor model if you just have a small set of training samples.
Machine learning algorithm for ranking It seems that you can use regression analysis. Also, probably you need to assign scores (real numbers) to the elements in your training set, if you don't have them. Although you can just use rank as y
26,628
Machine learning algorithm for ranking
I think you are expecting too much from machine learning algorithms. A computer cannot decide whether item 1 is better than item 2 on its own. What a Machine Learning algorithm can do is if you give it a few examples where you have rated some item 1 to be better than item 2, then it can learn to rank the items [1]. But you still need a training data where you provide examples of items and with information of whether item 1 is greater than item 2 for all items in the training data. [1] http://www.cs.cornell.edu/people/tj/svm_light/svm_rank.html
Machine learning algorithm for ranking
I think you are expecting too much from machine learning algorithms. A computer cannot decide whether item 1 is better than item 2 on its own. What a Machine Learning algorithm can do is if you give i
Machine learning algorithm for ranking I think you are expecting too much from machine learning algorithms. A computer cannot decide whether item 1 is better than item 2 on its own. What a Machine Learning algorithm can do is if you give it a few examples where you have rated some item 1 to be better than item 2, then it can learn to rank the items [1]. But you still need a training data where you provide examples of items and with information of whether item 1 is greater than item 2 for all items in the training data. [1] http://www.cs.cornell.edu/people/tj/svm_light/svm_rank.html
Machine learning algorithm for ranking I think you are expecting too much from machine learning algorithms. A computer cannot decide whether item 1 is better than item 2 on its own. What a Machine Learning algorithm can do is if you give i
26,629
Dirichlet posterior
For me the most helpful way to envision the effect of the parameters for Dirichlet is the Polya urn. Imagine you have an urn containing n different colors, with $\alpha_i$ of each color in the urn (note that you can have fractions of a ball). You reach in and draw a ball, then replace it along with another of the same color. You then repeat this an infinite amount of times and the final proportion constitutes a sample from the a Dirichlet distribution. If you have very small values for $\alpha$, it should be clear that the added ball will heavily weight you towards the color of that first draw, which explains why the mass moves to the corners of the simplex. If you have large $\alpha's$, then that first draw doesn't impact the final proportion as much. What your posterior is essentially saying is that you started with $\alpha_i$ balls of color $i$, did a bunch of draws, and happened to draw out that color $N_i$ times. You can then imagine samples from the posterior being generated with the same process and imagine the effects the initial $\alpha$ along with the counts $N$ will have on those samples. Clearly a small value for $\alpha$ will have less effect on the posterior. Another way to think about it is that the parameters to your Dirichlet control how much you trust your data. If you have small values of $\alpha$, then you trust your data almost entirely. Conversely, if you have large values for $\alpha$, then you are less trusting of your data and will smooth the posterior a bit more. In summary, you are correct to say that as you decrease the $\alpha's$, they will have less effect on the posterior, but at the same time the prior will have most of its mass on the corners of the simplex.
Dirichlet posterior
For me the most helpful way to envision the effect of the parameters for Dirichlet is the Polya urn. Imagine you have an urn containing n different colors, with $\alpha_i$ of each color in the urn (n
Dirichlet posterior For me the most helpful way to envision the effect of the parameters for Dirichlet is the Polya urn. Imagine you have an urn containing n different colors, with $\alpha_i$ of each color in the urn (note that you can have fractions of a ball). You reach in and draw a ball, then replace it along with another of the same color. You then repeat this an infinite amount of times and the final proportion constitutes a sample from the a Dirichlet distribution. If you have very small values for $\alpha$, it should be clear that the added ball will heavily weight you towards the color of that first draw, which explains why the mass moves to the corners of the simplex. If you have large $\alpha's$, then that first draw doesn't impact the final proportion as much. What your posterior is essentially saying is that you started with $\alpha_i$ balls of color $i$, did a bunch of draws, and happened to draw out that color $N_i$ times. You can then imagine samples from the posterior being generated with the same process and imagine the effects the initial $\alpha$ along with the counts $N$ will have on those samples. Clearly a small value for $\alpha$ will have less effect on the posterior. Another way to think about it is that the parameters to your Dirichlet control how much you trust your data. If you have small values of $\alpha$, then you trust your data almost entirely. Conversely, if you have large values for $\alpha$, then you are less trusting of your data and will smooth the posterior a bit more. In summary, you are correct to say that as you decrease the $\alpha's$, they will have less effect on the posterior, but at the same time the prior will have most of its mass on the corners of the simplex.
Dirichlet posterior For me the most helpful way to envision the effect of the parameters for Dirichlet is the Polya urn. Imagine you have an urn containing n different colors, with $\alpha_i$ of each color in the urn (n
26,630
Hierarchical models for multiple comparisons - multiple outcomes context
I think I've got a reasonable partial solution for the hierarchical Bayesian model. rjags Code below.... dflong$dv <- scale(dflong$dv)[,1] dataList = list( y = dflong$dv, rmFac = dflong$rmFac , dvFac = dflong$dvFac , id = dflong$id , Ntotal = length(dflong$dv) , NrmLvl = length(unique(dflong$rmFac)), Ndep = length(unique(dflong$dvFac)), NsLvl = length(unique(dflong$id)) ) modelstring = " model { for( i in 1:Ntotal ) { y[i] ~ dnorm( mu[i] , tau[rmFac[i], dvFac[i]]) mu[i] <- a0[ dvFac[i] ] + aS[id[i], dvFac[i]] + a1[rmFac[i] , dvFac[i]] } for (k in 1:Ndep){ for ( j in 1:NrmLvl ) { tau[j, k] <- 1 / pow( sigma[j, k] , 2 ) sigma[j, k] ~ dgamma(1.01005,0.1005) } } for (k in 1:Ndep) { a0[k] ~ dnorm(0, 0.001) for (s in 1:NsLvl){ aS[s, k] ~ dnorm(0.0, sTau[k]) } for (j in 1:NrmLvl) { a1[j, k] ~ dnorm(0, a1Tau[k]) } a1Tau[k] <- 1/ pow( a1SD[k] , 2) a1SD[k] ~ dgamma(1.01005,0.1005) sTau[k] <- 1/ pow( sSD[k] , 2) sSD[k] ~ dgamma(1.01005,0.1005) } } " # close quote for modelstring writeLines(modelstring,con="model.txt") Again, base Bayesian repeated measures script from Kruschke
Hierarchical models for multiple comparisons - multiple outcomes context
I think I've got a reasonable partial solution for the hierarchical Bayesian model. rjags Code below.... dflong$dv <- scale(dflong$dv)[,1] dataList = list( y = dflong$dv, rmFac = dflong$
Hierarchical models for multiple comparisons - multiple outcomes context I think I've got a reasonable partial solution for the hierarchical Bayesian model. rjags Code below.... dflong$dv <- scale(dflong$dv)[,1] dataList = list( y = dflong$dv, rmFac = dflong$rmFac , dvFac = dflong$dvFac , id = dflong$id , Ntotal = length(dflong$dv) , NrmLvl = length(unique(dflong$rmFac)), Ndep = length(unique(dflong$dvFac)), NsLvl = length(unique(dflong$id)) ) modelstring = " model { for( i in 1:Ntotal ) { y[i] ~ dnorm( mu[i] , tau[rmFac[i], dvFac[i]]) mu[i] <- a0[ dvFac[i] ] + aS[id[i], dvFac[i]] + a1[rmFac[i] , dvFac[i]] } for (k in 1:Ndep){ for ( j in 1:NrmLvl ) { tau[j, k] <- 1 / pow( sigma[j, k] , 2 ) sigma[j, k] ~ dgamma(1.01005,0.1005) } } for (k in 1:Ndep) { a0[k] ~ dnorm(0, 0.001) for (s in 1:NsLvl){ aS[s, k] ~ dnorm(0.0, sTau[k]) } for (j in 1:NrmLvl) { a1[j, k] ~ dnorm(0, a1Tau[k]) } a1Tau[k] <- 1/ pow( a1SD[k] , 2) a1SD[k] ~ dgamma(1.01005,0.1005) sTau[k] <- 1/ pow( sSD[k] , 2) sSD[k] ~ dgamma(1.01005,0.1005) } } " # close quote for modelstring writeLines(modelstring,con="model.txt") Again, base Bayesian repeated measures script from Kruschke
Hierarchical models for multiple comparisons - multiple outcomes context I think I've got a reasonable partial solution for the hierarchical Bayesian model. rjags Code below.... dflong$dv <- scale(dflong$dv)[,1] dataList = list( y = dflong$dv, rmFac = dflong$
26,631
Hierarchical models for multiple comparisons - multiple outcomes context
I finally found a literature solution to my problem Bayesian models for multiple outcomes nested in domains by Thurston et al. 2009. They propose a hierarchical model for single or multiple domains that reflects the domain dependent nature of the variables. It incorporates random effects for individuals and individuals across domains (if there are multiple domains). It can also be easily extended to include repeated measures or longitudinal designs. Note: I'll post a JAGS model on here to complete the answer soon
Hierarchical models for multiple comparisons - multiple outcomes context
I finally found a literature solution to my problem Bayesian models for multiple outcomes nested in domains by Thurston et al. 2009. They propose a hierarchical model for single or multiple domains th
Hierarchical models for multiple comparisons - multiple outcomes context I finally found a literature solution to my problem Bayesian models for multiple outcomes nested in domains by Thurston et al. 2009. They propose a hierarchical model for single or multiple domains that reflects the domain dependent nature of the variables. It incorporates random effects for individuals and individuals across domains (if there are multiple domains). It can also be easily extended to include repeated measures or longitudinal designs. Note: I'll post a JAGS model on here to complete the answer soon
Hierarchical models for multiple comparisons - multiple outcomes context I finally found a literature solution to my problem Bayesian models for multiple outcomes nested in domains by Thurston et al. 2009. They propose a hierarchical model for single or multiple domains th
26,632
Transform continuous variables for logistic regression
You should be wary of decide about transforming or not the variables just on statistical grounds. You must look on interpretation. ¿Is it reasonable that your responses is linear in $x$? or is it more probably linear in $\log(x)$? And to discuss that, we need to know your varaibles... Just as an example: independent of model fit, I wouldn't believe mortality to be a linear function of age! Since you say you have "large data", you could look into splines, to let the data speak about transformations ... for instance, package mgcv in R. But even using such technology (or other methodsto search for transformations automatically), the ultimate test is to ask yourselves what makes scientific sense. ¿What do other people in your field do with similar data?
Transform continuous variables for logistic regression
You should be wary of decide about transforming or not the variables just on statistical grounds. You must look on interpretation. ¿Is it reasonable that your responses is linear in $x$? or is it more
Transform continuous variables for logistic regression You should be wary of decide about transforming or not the variables just on statistical grounds. You must look on interpretation. ¿Is it reasonable that your responses is linear in $x$? or is it more probably linear in $\log(x)$? And to discuss that, we need to know your varaibles... Just as an example: independent of model fit, I wouldn't believe mortality to be a linear function of age! Since you say you have "large data", you could look into splines, to let the data speak about transformations ... for instance, package mgcv in R. But even using such technology (or other methodsto search for transformations automatically), the ultimate test is to ask yourselves what makes scientific sense. ¿What do other people in your field do with similar data?
Transform continuous variables for logistic regression You should be wary of decide about transforming or not the variables just on statistical grounds. You must look on interpretation. ¿Is it reasonable that your responses is linear in $x$? or is it more
26,633
Transform continuous variables for logistic regression
The critical issue is what are the numbers supposed to represent in the real world and what is the hypothesized relationship between those variables and the dependent variable. You may improve your model by 'cleaning' your data, but if it doesn't better reflect the real world you have been unsuccessful. Maybe the distributions of your data mean your modeling approach is incorrect and you need a different approach altogether, maybe your data have problems. Why you remove variables if they have corr>.3 is beyond me. Maybe those things really are related and both are important to the dependent variable. You can deal with this with an index or a function representing the joint contribution of correlated variables. It appears you are blindly throwing out information based on an arbitrary statistical criteria. Why not use corr>.31, or .33?
Transform continuous variables for logistic regression
The critical issue is what are the numbers supposed to represent in the real world and what is the hypothesized relationship between those variables and the dependent variable. You may improve your m
Transform continuous variables for logistic regression The critical issue is what are the numbers supposed to represent in the real world and what is the hypothesized relationship between those variables and the dependent variable. You may improve your model by 'cleaning' your data, but if it doesn't better reflect the real world you have been unsuccessful. Maybe the distributions of your data mean your modeling approach is incorrect and you need a different approach altogether, maybe your data have problems. Why you remove variables if they have corr>.3 is beyond me. Maybe those things really are related and both are important to the dependent variable. You can deal with this with an index or a function representing the joint contribution of correlated variables. It appears you are blindly throwing out information based on an arbitrary statistical criteria. Why not use corr>.31, or .33?
Transform continuous variables for logistic regression The critical issue is what are the numbers supposed to represent in the real world and what is the hypothesized relationship between those variables and the dependent variable. You may improve your m
26,634
Introduction to applied probability for pure mathematicians?
Though I am sure that @cardinal will also put together an excellent program, let me mention a couple of books that might cover some of the things the OP is asking for. I recently came across Probability for Statistics and Machine Learning by Anirban DasGupta, which appears to me to cover many of the probabilistic topics asked for. It is fairly mathematical in its style, though it does not seem to be "hard core" measure theoretic. The best "hard core" books are, in my opinion, Real Analysis and Probability by Dudley and Foundations of Modern Probability by Kallenberg. These two very mathematical books should be accessible given the OPs background in functional analysis and operator algebra $-$ they may even be enjoyable. Neither of them has much to say about applications though. On the more applied side I will definitely mention Elements of Statistical Learning by Hastie et al., which provides a treatment of many modern topics and applications from statistics and machine learning. Another book that I will recommend is In All Likelihood by Pawitan. It deals with more standard statistical material and applications and is fairly mathematical too.
Introduction to applied probability for pure mathematicians?
Though I am sure that @cardinal will also put together an excellent program, let me mention a couple of books that might cover some of the things the OP is asking for. I recently came across Probabili
Introduction to applied probability for pure mathematicians? Though I am sure that @cardinal will also put together an excellent program, let me mention a couple of books that might cover some of the things the OP is asking for. I recently came across Probability for Statistics and Machine Learning by Anirban DasGupta, which appears to me to cover many of the probabilistic topics asked for. It is fairly mathematical in its style, though it does not seem to be "hard core" measure theoretic. The best "hard core" books are, in my opinion, Real Analysis and Probability by Dudley and Foundations of Modern Probability by Kallenberg. These two very mathematical books should be accessible given the OPs background in functional analysis and operator algebra $-$ they may even be enjoyable. Neither of them has much to say about applications though. On the more applied side I will definitely mention Elements of Statistical Learning by Hastie et al., which provides a treatment of many modern topics and applications from statistics and machine learning. Another book that I will recommend is In All Likelihood by Pawitan. It deals with more standard statistical material and applications and is fairly mathematical too.
Introduction to applied probability for pure mathematicians? Though I am sure that @cardinal will also put together an excellent program, let me mention a couple of books that might cover some of the things the OP is asking for. I recently came across Probabili
26,635
Introduction to applied probability for pure mathematicians?
For a measure theory based introduction to probability I recommend Durrett's "Probability: Theory and Examples" (ISBN 0521765390) with Cosma Shalizi's "Almost None of the Theory of Stochastic Processes," (helpfully freely available http://www.stat.cmu.edu/~cshalizi/almost-none/v0.1.1/almost-none.pdf). I have not come across a perfect self contained book for everything after that. Some combination of MacKays's book (good for neural networks: http://www.inference.phy.cam.ac.uk/itprnn/book.html), the Koller and Friedman graphical models book (ISBN: 0262013193) and a good graduate level mathematical statistics book might work.
Introduction to applied probability for pure mathematicians?
For a measure theory based introduction to probability I recommend Durrett's "Probability: Theory and Examples" (ISBN 0521765390) with Cosma Shalizi's "Almost None of the Theory of Stochastic Processe
Introduction to applied probability for pure mathematicians? For a measure theory based introduction to probability I recommend Durrett's "Probability: Theory and Examples" (ISBN 0521765390) with Cosma Shalizi's "Almost None of the Theory of Stochastic Processes," (helpfully freely available http://www.stat.cmu.edu/~cshalizi/almost-none/v0.1.1/almost-none.pdf). I have not come across a perfect self contained book for everything after that. Some combination of MacKays's book (good for neural networks: http://www.inference.phy.cam.ac.uk/itprnn/book.html), the Koller and Friedman graphical models book (ISBN: 0262013193) and a good graduate level mathematical statistics book might work.
Introduction to applied probability for pure mathematicians? For a measure theory based introduction to probability I recommend Durrett's "Probability: Theory and Examples" (ISBN 0521765390) with Cosma Shalizi's "Almost None of the Theory of Stochastic Processe
26,636
What is the most efficient way of training data using least memory?
I believe the term for this type of learning is out-of-core learning. One suggestion is vowpal wabbit, which has a convenient R library, as well as libraries for many other languages.
What is the most efficient way of training data using least memory?
I believe the term for this type of learning is out-of-core learning. One suggestion is vowpal wabbit, which has a convenient R library, as well as libraries for many other languages.
What is the most efficient way of training data using least memory? I believe the term for this type of learning is out-of-core learning. One suggestion is vowpal wabbit, which has a convenient R library, as well as libraries for many other languages.
What is the most efficient way of training data using least memory? I believe the term for this type of learning is out-of-core learning. One suggestion is vowpal wabbit, which has a convenient R library, as well as libraries for many other languages.
26,637
What is the most efficient way of training data using least memory?
I heartily second Zach's suggestion. vowpal wabbit is an excellent option, and you'd be suprised by its speed. A 200k by 10k data-set is not considered large by vowpal wabbit's norms. vowpal_wabbit (available in source form via https://github.com/JohnLangford/vowpal_wabbit, an older version is available as a standard package in Ubuntu universe) is a fast online linear + bilinear learner, with very flexible input. You may mix binary and numeric-valued features. There's no need to number the features as variable names will work "as is". It has a ton of options, algorithms, reductions, loss-functions, and all-in-all great flexibility. You may join the mailing list (find it via github) and ask any question. The community is very knowledgable and supportive.
What is the most efficient way of training data using least memory?
I heartily second Zach's suggestion. vowpal wabbit is an excellent option, and you'd be suprised by its speed. A 200k by 10k data-set is not considered large by vowpal wabbit's norms. vowpal_wabbit
What is the most efficient way of training data using least memory? I heartily second Zach's suggestion. vowpal wabbit is an excellent option, and you'd be suprised by its speed. A 200k by 10k data-set is not considered large by vowpal wabbit's norms. vowpal_wabbit (available in source form via https://github.com/JohnLangford/vowpal_wabbit, an older version is available as a standard package in Ubuntu universe) is a fast online linear + bilinear learner, with very flexible input. You may mix binary and numeric-valued features. There's no need to number the features as variable names will work "as is". It has a ton of options, algorithms, reductions, loss-functions, and all-in-all great flexibility. You may join the mailing list (find it via github) and ask any question. The community is very knowledgable and supportive.
What is the most efficient way of training data using least memory? I heartily second Zach's suggestion. vowpal wabbit is an excellent option, and you'd be suprised by its speed. A 200k by 10k data-set is not considered large by vowpal wabbit's norms. vowpal_wabbit
26,638
What is the most efficient way of training data using least memory?
I answered similar question here. Point is most machine learning/data mining algorithms are batch learners that is they load all data to memory. Therefore you need different tools for very large data sets as you have. See that questions's tools also. Online Learning is a way to reduce memory footprint of algorithms.
What is the most efficient way of training data using least memory?
I answered similar question here. Point is most machine learning/data mining algorithms are batch learners that is they load all data to memory. Therefore you need different tools for very large data
What is the most efficient way of training data using least memory? I answered similar question here. Point is most machine learning/data mining algorithms are batch learners that is they load all data to memory. Therefore you need different tools for very large data sets as you have. See that questions's tools also. Online Learning is a way to reduce memory footprint of algorithms.
What is the most efficient way of training data using least memory? I answered similar question here. Point is most machine learning/data mining algorithms are batch learners that is they load all data to memory. Therefore you need different tools for very large data
26,639
Boosted AR for time series forecasting?
You can have a look at the paper "Boosting multi-step autoregressive forecasts (link not working)" by Souhaib Ben Taieb and Rob J. Hyndman. Or directly at Hyndmans website Boosting multi-step autoregressive forecasts. The main idea is to boost a traditional autoregressive (linear) model using a gradient boosting approach.
Boosted AR for time series forecasting?
You can have a look at the paper "Boosting multi-step autoregressive forecasts (link not working)" by Souhaib Ben Taieb and Rob J. Hyndman. Or directly at Hyndmans website Boosting multi-step autoreg
Boosted AR for time series forecasting? You can have a look at the paper "Boosting multi-step autoregressive forecasts (link not working)" by Souhaib Ben Taieb and Rob J. Hyndman. Or directly at Hyndmans website Boosting multi-step autoregressive forecasts. The main idea is to boost a traditional autoregressive (linear) model using a gradient boosting approach.
Boosted AR for time series forecasting? You can have a look at the paper "Boosting multi-step autoregressive forecasts (link not working)" by Souhaib Ben Taieb and Rob J. Hyndman. Or directly at Hyndmans website Boosting multi-step autoreg
26,640
Boosted AR for time series forecasting?
You probably need to include some nonlinear regressors into your AR formulation. Thus instead of regressing the vector $x_{t+1}$ against some $x_{t-i}$ ($i=0,1,...$) only (which gives standard AR), regress against $x_{t-i}$ and $x_{t-i}^2$ ($i=0,1,...$) or even $x_{t-i}x_{t-j}$ ($i\le j$). To get the most useful result you probably need to use subset selection to identify the most useful nonlinear terms. (All operations on vectors pointwise, but you may also consider products of different components.)
Boosted AR for time series forecasting?
You probably need to include some nonlinear regressors into your AR formulation. Thus instead of regressing the vector $x_{t+1}$ against some $x_{t-i}$ ($i=0,1,...$) only (which gives standard AR), re
Boosted AR for time series forecasting? You probably need to include some nonlinear regressors into your AR formulation. Thus instead of regressing the vector $x_{t+1}$ against some $x_{t-i}$ ($i=0,1,...$) only (which gives standard AR), regress against $x_{t-i}$ and $x_{t-i}^2$ ($i=0,1,...$) or even $x_{t-i}x_{t-j}$ ($i\le j$). To get the most useful result you probably need to use subset selection to identify the most useful nonlinear terms. (All operations on vectors pointwise, but you may also consider products of different components.)
Boosted AR for time series forecasting? You probably need to include some nonlinear regressors into your AR formulation. Thus instead of regressing the vector $x_{t+1}$ against some $x_{t-i}$ ($i=0,1,...$) only (which gives standard AR), re
26,641
What's a time series model for forecasting a percentage bound by (0,1)?
I asked this a long time ago but SO just popped it back up. In the case I was looking at, I ended up forecasting numerator and denominator separately, which made more sense for the metric anyway.
What's a time series model for forecasting a percentage bound by (0,1)?
I asked this a long time ago but SO just popped it back up. In the case I was looking at, I ended up forecasting numerator and denominator separately, which made more sense for the metric anyway.
What's a time series model for forecasting a percentage bound by (0,1)? I asked this a long time ago but SO just popped it back up. In the case I was looking at, I ended up forecasting numerator and denominator separately, which made more sense for the metric anyway.
What's a time series model for forecasting a percentage bound by (0,1)? I asked this a long time ago but SO just popped it back up. In the case I was looking at, I ended up forecasting numerator and denominator separately, which made more sense for the metric anyway.
26,642
What's a time series model for forecasting a percentage bound by (0,1)?
In my PhD Dissertation at Stanford in 1978 I constructed a family of first order autoregressive processes with uniform marginal distributions on $[0,1]$ For any integer $r\geq 2$ let $X(t) = X(t-1)/r+e(t)$ where $e(t)$ has the following discrete uniform distribution that is $P(e(t) = k/r)=1/r$ for $k=0,1,..., r-1$. It is interesting that even though $e(t)$ is discrete each $X(t)$ has a continuous uniform distribution on $[0,1]$ if you start out assuming $X(0)$ is uniform on $[0,1]$. Later Richard Davis and I extended this to negative correlation i.e. $X(t) =-X(t-1)/r + e(t)$. It is interesting as an example of a stationary autoregressive time series constrained to vary between $0$ and $1$ as you indicated you are interested in. It is a slightly pathological case because although the maximum of the sequences satisfies an extreme value limit similar to the limit for IID uniforms it has an extremal index less than $1$. In my thesis and Annals of Probability paper I showed that the extremal index was $(r-1)/r$. I didn't refer to it as the extremal index because that term was coined later by Leadbetter (most notably mentioned in his 1983 Springer text coauthored with Rootzen and Lindgren). I don't know if this model has much practical value. I think probably not since the noise distribution is so peculiar. But it does serve as a slightly pathological example.
What's a time series model for forecasting a percentage bound by (0,1)?
In my PhD Dissertation at Stanford in 1978 I constructed a family of first order autoregressive processes with uniform marginal distributions on $[0,1]$ For any integer $r\geq 2$ let $X(t) = X(t-1)/r
What's a time series model for forecasting a percentage bound by (0,1)? In my PhD Dissertation at Stanford in 1978 I constructed a family of first order autoregressive processes with uniform marginal distributions on $[0,1]$ For any integer $r\geq 2$ let $X(t) = X(t-1)/r+e(t)$ where $e(t)$ has the following discrete uniform distribution that is $P(e(t) = k/r)=1/r$ for $k=0,1,..., r-1$. It is interesting that even though $e(t)$ is discrete each $X(t)$ has a continuous uniform distribution on $[0,1]$ if you start out assuming $X(0)$ is uniform on $[0,1]$. Later Richard Davis and I extended this to negative correlation i.e. $X(t) =-X(t-1)/r + e(t)$. It is interesting as an example of a stationary autoregressive time series constrained to vary between $0$ and $1$ as you indicated you are interested in. It is a slightly pathological case because although the maximum of the sequences satisfies an extreme value limit similar to the limit for IID uniforms it has an extremal index less than $1$. In my thesis and Annals of Probability paper I showed that the extremal index was $(r-1)/r$. I didn't refer to it as the extremal index because that term was coined later by Leadbetter (most notably mentioned in his 1983 Springer text coauthored with Rootzen and Lindgren). I don't know if this model has much practical value. I think probably not since the noise distribution is so peculiar. But it does serve as a slightly pathological example.
What's a time series model for forecasting a percentage bound by (0,1)? In my PhD Dissertation at Stanford in 1978 I constructed a family of first order autoregressive processes with uniform marginal distributions on $[0,1]$ For any integer $r\geq 2$ let $X(t) = X(t-1)/r
26,643
Are regressions with student-t errors useless?
Your edit is correct. The results presented in the paper apply only to multivariate-t errors. If you are using independent t errors, then you are safe. I do not think the paper is well known, but I think it is correct. The statistical literature is full of "generalizations" which in many cases are either reparameterizations, one-to-one transformations or sometimes useless because they do not contribute significantly in generalizing some properties of the model in question.
Are regressions with student-t errors useless?
Your edit is correct. The results presented in the paper apply only to multivariate-t errors. If you are using independent t errors, then you are safe. I do not think the paper is well known, but I th
Are regressions with student-t errors useless? Your edit is correct. The results presented in the paper apply only to multivariate-t errors. If you are using independent t errors, then you are safe. I do not think the paper is well known, but I think it is correct. The statistical literature is full of "generalizations" which in many cases are either reparameterizations, one-to-one transformations or sometimes useless because they do not contribute significantly in generalizing some properties of the model in question.
Are regressions with student-t errors useless? Your edit is correct. The results presented in the paper apply only to multivariate-t errors. If you are using independent t errors, then you are safe. I do not think the paper is well known, but I th
26,644
Different prediction plot from survival coxph and rms cph
I think there should definitely be a point where the confidence interval is zero width. You might also try a third way which is to use solely rms functions. There is an example under the help file for contrast.rms to get a hazard ratio plot. It starts with the comment # show separate estimates by treatment and sex. You'll need to anti-log to get the ratio.
Different prediction plot from survival coxph and rms cph
I think there should definitely be a point where the confidence interval is zero width. You might also try a third way which is to use solely rms functions. There is an example under the help file f
Different prediction plot from survival coxph and rms cph I think there should definitely be a point where the confidence interval is zero width. You might also try a third way which is to use solely rms functions. There is an example under the help file for contrast.rms to get a hazard ratio plot. It starts with the comment # show separate estimates by treatment and sex. You'll need to anti-log to get the ratio.
Different prediction plot from survival coxph and rms cph I think there should definitely be a point where the confidence interval is zero width. You might also try a third way which is to use solely rms functions. There is an example under the help file f
26,645
Does significance test make sense to compare randomised groups at baseline?
A hypothesis test would be nonsensical, but a significance test may be useful. The hypothesis test would be testing a null hypothesis that is already known to be true, as your question makes clear. It is silly to apply a statistical test to any hypothesis that has a truth value already known via completely reliable information. A significance test provides a P value that, again as you already say, indicates the probability of getting data as extreme or more extreme given the null hypothesis. However, it seems to me that such a P value can be interpreted in a manner that equates to an answer to the question "How often might I expect to see a difference in baseline values as large as this time, or larger?" The answer might be useful even if it is not clear for what purpose.
Does significance test make sense to compare randomised groups at baseline?
A hypothesis test would be nonsensical, but a significance test may be useful. The hypothesis test would be testing a null hypothesis that is already known to be true, as your question makes clear. It
Does significance test make sense to compare randomised groups at baseline? A hypothesis test would be nonsensical, but a significance test may be useful. The hypothesis test would be testing a null hypothesis that is already known to be true, as your question makes clear. It is silly to apply a statistical test to any hypothesis that has a truth value already known via completely reliable information. A significance test provides a P value that, again as you already say, indicates the probability of getting data as extreme or more extreme given the null hypothesis. However, it seems to me that such a P value can be interpreted in a manner that equates to an answer to the question "How often might I expect to see a difference in baseline values as large as this time, or larger?" The answer might be useful even if it is not clear for what purpose.
Does significance test make sense to compare randomised groups at baseline? A hypothesis test would be nonsensical, but a significance test may be useful. The hypothesis test would be testing a null hypothesis that is already known to be true, as your question makes clear. It
26,646
Time series clustering
Step 1 Perform a fast Fourier transform on the time series data. This decomposes your time series data into mean and frequency components and allows you to use variables for clustering that do not show heavy autocorrelation like many raw time series. Step 2 If time series is real-valued, discard the second half of the fast Fourier transform elements because they are redundant. Step 3 Separate the real and imaginary parts of each fast Fourier transform element. Step 4 Perform model-based clustering on the real and imaginary parts of each frequency element. Step 5 Plot the percentiles of the time series by cluster to examine their shape. Alternately, you could omit the DC components of the fast Fourier transform to avoid your clusters being based on the mean and instead on the series defined by the Fourier transform, which represents the shape of the time series. You will also want to calculate the amplitudes and phase angles from the fast Fourier transform so that you can explore the distribution of time series spectra within clusters. See this StackOverflow answer on how to do that for real-valued data. You could also plot the percentiles of time series shape by cluster by computing the Fourier series from the amplitudes and phase angles (the resulting time series estimate will not perfectly match the original time series). You could also plot the percentiles of the raw time series data by cluster. Here is an example of such a plot, which came about from a harmonic analysis of NDVI data I just did today: Finally, if your time series is not stationary (i.e., mean and variance shift over time), it may be more appropriate to use a wavelet transform rather than a Fourier transform. You would do so at the cost of information about frequencies while gaining information about location.
Time series clustering
Step 1 Perform a fast Fourier transform on the time series data. This decomposes your time series data into mean and frequency components and allows you to use variables for clustering that do not sho
Time series clustering Step 1 Perform a fast Fourier transform on the time series data. This decomposes your time series data into mean and frequency components and allows you to use variables for clustering that do not show heavy autocorrelation like many raw time series. Step 2 If time series is real-valued, discard the second half of the fast Fourier transform elements because they are redundant. Step 3 Separate the real and imaginary parts of each fast Fourier transform element. Step 4 Perform model-based clustering on the real and imaginary parts of each frequency element. Step 5 Plot the percentiles of the time series by cluster to examine their shape. Alternately, you could omit the DC components of the fast Fourier transform to avoid your clusters being based on the mean and instead on the series defined by the Fourier transform, which represents the shape of the time series. You will also want to calculate the amplitudes and phase angles from the fast Fourier transform so that you can explore the distribution of time series spectra within clusters. See this StackOverflow answer on how to do that for real-valued data. You could also plot the percentiles of time series shape by cluster by computing the Fourier series from the amplitudes and phase angles (the resulting time series estimate will not perfectly match the original time series). You could also plot the percentiles of the raw time series data by cluster. Here is an example of such a plot, which came about from a harmonic analysis of NDVI data I just did today: Finally, if your time series is not stationary (i.e., mean and variance shift over time), it may be more appropriate to use a wavelet transform rather than a Fourier transform. You would do so at the cost of information about frequencies while gaining information about location.
Time series clustering Step 1 Perform a fast Fourier transform on the time series data. This decomposes your time series data into mean and frequency components and allows you to use variables for clustering that do not sho
26,647
Using MCMC to evaluate the expected value of a high-dimensional function
I would always remember, that MCMC is just a numerical integration tool (and a rather inefficient one at that). It is not some magic/mystical thing. It is very useful because it is reasonably easy to apply. It does not require much thinking compared to some other numerical integration techniques. For instance, you do not have to do any derivatives. You only have to generate "random numbers". However, like any numerical integration method, it is not a universal catch all tool. There are conditions when it is useful, and conditions when it isn't. It may be wiser to set up another technique. Depending on how big $h$ is, and how fast your computer is, and how much time you are prepared to wait for results. A uniform grid may do the job (although this requires small $h$ or a long amount of waiting). The "job" is to evaluate the integral - the equation does not care what meaning you or I attach to the result (and hence it does not care whether we obtained the result randomly or not). Additionally, if your estimates of $\omega$ are quite accurate, the $f(\omega)$ will be sharply peaked and closely resemble a delta function, so the integral is effectively substituting $\omega\rightarrow\omega_{max}$. Another numerical integration technique is using a taylor series under the integral. $f(\omega)\approx f(\omega_{max})+(\omega-\omega_{max})f'(\omega_{max})+\frac{1}{2}(\omega-\omega_{max})^{2}f''(\omega_{max})+\dots$ This is a useful strategy when the moments of $\omega$ are easily obtained. Edwin Jaynes has a nice quote on this: whenever there is a randomised way of doing something, there is a non-randomised way which yields better results, but requires more thinking One "more thinking" way is to use "stratified MCMC" to do the integral. So rather than "randomly" pick a spot on the whole parameter space: divide it up into "strata". These "strata" should be picked so that you get a good range of the high part of the integral. Then randomly sample within each strata. But this will require you to write your own code I would imagine (i.e. more thinking).
Using MCMC to evaluate the expected value of a high-dimensional function
I would always remember, that MCMC is just a numerical integration tool (and a rather inefficient one at that). It is not some magic/mystical thing. It is very useful because it is reasonably easy t
Using MCMC to evaluate the expected value of a high-dimensional function I would always remember, that MCMC is just a numerical integration tool (and a rather inefficient one at that). It is not some magic/mystical thing. It is very useful because it is reasonably easy to apply. It does not require much thinking compared to some other numerical integration techniques. For instance, you do not have to do any derivatives. You only have to generate "random numbers". However, like any numerical integration method, it is not a universal catch all tool. There are conditions when it is useful, and conditions when it isn't. It may be wiser to set up another technique. Depending on how big $h$ is, and how fast your computer is, and how much time you are prepared to wait for results. A uniform grid may do the job (although this requires small $h$ or a long amount of waiting). The "job" is to evaluate the integral - the equation does not care what meaning you or I attach to the result (and hence it does not care whether we obtained the result randomly or not). Additionally, if your estimates of $\omega$ are quite accurate, the $f(\omega)$ will be sharply peaked and closely resemble a delta function, so the integral is effectively substituting $\omega\rightarrow\omega_{max}$. Another numerical integration technique is using a taylor series under the integral. $f(\omega)\approx f(\omega_{max})+(\omega-\omega_{max})f'(\omega_{max})+\frac{1}{2}(\omega-\omega_{max})^{2}f''(\omega_{max})+\dots$ This is a useful strategy when the moments of $\omega$ are easily obtained. Edwin Jaynes has a nice quote on this: whenever there is a randomised way of doing something, there is a non-randomised way which yields better results, but requires more thinking One "more thinking" way is to use "stratified MCMC" to do the integral. So rather than "randomly" pick a spot on the whole parameter space: divide it up into "strata". These "strata" should be picked so that you get a good range of the high part of the integral. Then randomly sample within each strata. But this will require you to write your own code I would imagine (i.e. more thinking).
Using MCMC to evaluate the expected value of a high-dimensional function I would always remember, that MCMC is just a numerical integration tool (and a rather inefficient one at that). It is not some magic/mystical thing. It is very useful because it is reasonably easy t
26,648
Using MCMC to evaluate the expected value of a high-dimensional function
There isn't any indication that your variables here are correlated so I dont know why you would use MCMC as opposed to regular Monte Carlo. There are many different sampling methods including the mentioned stratified sampling (Latin hypercube) and QMC. Sparse quadrature methods are very good if the dimension of the problem is not too high (not more than 10) since sparse quadrature grids grow geometrically (curse of dimensionality). But it sounds like you are on the right track with respect to importance sampling. The key here is to choose a biased distribution that has large probability concentrated near your region of interest and that it has thicker tails than the nominal distribution. I'd like to add that this is an open research problem so if you can come up with something good it would be of great interest to the community!
Using MCMC to evaluate the expected value of a high-dimensional function
There isn't any indication that your variables here are correlated so I dont know why you would use MCMC as opposed to regular Monte Carlo. There are many different sampling methods including the ment
Using MCMC to evaluate the expected value of a high-dimensional function There isn't any indication that your variables here are correlated so I dont know why you would use MCMC as opposed to regular Monte Carlo. There are many different sampling methods including the mentioned stratified sampling (Latin hypercube) and QMC. Sparse quadrature methods are very good if the dimension of the problem is not too high (not more than 10) since sparse quadrature grids grow geometrically (curse of dimensionality). But it sounds like you are on the right track with respect to importance sampling. The key here is to choose a biased distribution that has large probability concentrated near your region of interest and that it has thicker tails than the nominal distribution. I'd like to add that this is an open research problem so if you can come up with something good it would be of great interest to the community!
Using MCMC to evaluate the expected value of a high-dimensional function There isn't any indication that your variables here are correlated so I dont know why you would use MCMC as opposed to regular Monte Carlo. There are many different sampling methods including the ment
26,649
Using MCMC to evaluate the expected value of a high-dimensional function
Since no one seemed to actually answer the question directly: yes you can use MCMC to sample from $g(\omega)$. MCMC can be used to sample from any distribution where the distribution is known only up to a constant of proportionality. In addition, you may want to look up variance reduction techniques in the MC integration field. A great self contained set of resources are the free book chapters available from Art Owen at Stanford. Specifically chapters 8, 9, and 10. There you will find in-depth treatments of adaptive sampling, recursion, and other techniques.
Using MCMC to evaluate the expected value of a high-dimensional function
Since no one seemed to actually answer the question directly: yes you can use MCMC to sample from $g(\omega)$. MCMC can be used to sample from any distribution where the distribution is known only up
Using MCMC to evaluate the expected value of a high-dimensional function Since no one seemed to actually answer the question directly: yes you can use MCMC to sample from $g(\omega)$. MCMC can be used to sample from any distribution where the distribution is known only up to a constant of proportionality. In addition, you may want to look up variance reduction techniques in the MC integration field. A great self contained set of resources are the free book chapters available from Art Owen at Stanford. Specifically chapters 8, 9, and 10. There you will find in-depth treatments of adaptive sampling, recursion, and other techniques.
Using MCMC to evaluate the expected value of a high-dimensional function Since no one seemed to actually answer the question directly: yes you can use MCMC to sample from $g(\omega)$. MCMC can be used to sample from any distribution where the distribution is known only up
26,650
Likelihood of my friend being able to guess skittle taste
Consider the first case with 2 out of 3 correct: Under the null hypothesis that your friend is purely guessing, the number correct is $X \sim \mathsf{Binom}(n=3, p=1/5).$ A test of the null hypothesis against the the alternative that $p > 1/5$ rejects for large values of $X.$ So the P-value for outcome $X = 2$ is $P(X \ge 2) = 0.104 > 0.05 = 5\%$ and you would not reject at the $5\%$ level. The evidence does not require you to believe your friend can identify color by taste. [Computation below in R, but summing two terms using the binomial PDF is not difficult. Note: If your friend got all three right, the probability of that just by guessing is $(1/5)^3 = 0.008$ and you should be convinced.] sum(dbinom(2:3, 3, 1/5)) [1] 0.104 However, if your friend gets 40 out of 100 correct, then the null distribution is $X \sim \mathsf{Binom}(n=100, p=1/5)$ and the P-value is $P(X \ge 40) \approx 0.$ So without ability to judge color by taste, this outcome would be very rare. You should believe your friend has some ability. sum(dbinom(40:100, 40, 1/5)) [1] 1.099512e-28 By normal approximation to $\mathsf{Binom}(n=100, p=1/5),$ you have $\mu = E(X) = np = 20,\;$ $\sigma^2 =Var(X) = 16,\;$ $\sigma = SD(X) = 4.$ Then $$P(X \ge 40) = P(X>39.5)\\ = P\left(\frac{X - \mu}{\sigma} > \frac{39.5-20}{4} = 4.875\right)\\ \approx P(Z > 4.875) \approx 0, $$ where $Z$ has a standard normal distribution. 1 - pnorm(4.875) [1] 5.440423e-07 In the figure below, the P-value is the (very small) sum of heights of bars to the right of the vertical dotted line. The red curve shows the density function of the approximating normal distribution.
Likelihood of my friend being able to guess skittle taste
Consider the first case with 2 out of 3 correct: Under the null hypothesis that your friend is purely guessing, the number correct is $X \sim \mathsf{Binom}(n=3, p=1/5).$ A test of the null hypothesis
Likelihood of my friend being able to guess skittle taste Consider the first case with 2 out of 3 correct: Under the null hypothesis that your friend is purely guessing, the number correct is $X \sim \mathsf{Binom}(n=3, p=1/5).$ A test of the null hypothesis against the the alternative that $p > 1/5$ rejects for large values of $X.$ So the P-value for outcome $X = 2$ is $P(X \ge 2) = 0.104 > 0.05 = 5\%$ and you would not reject at the $5\%$ level. The evidence does not require you to believe your friend can identify color by taste. [Computation below in R, but summing two terms using the binomial PDF is not difficult. Note: If your friend got all three right, the probability of that just by guessing is $(1/5)^3 = 0.008$ and you should be convinced.] sum(dbinom(2:3, 3, 1/5)) [1] 0.104 However, if your friend gets 40 out of 100 correct, then the null distribution is $X \sim \mathsf{Binom}(n=100, p=1/5)$ and the P-value is $P(X \ge 40) \approx 0.$ So without ability to judge color by taste, this outcome would be very rare. You should believe your friend has some ability. sum(dbinom(40:100, 40, 1/5)) [1] 1.099512e-28 By normal approximation to $\mathsf{Binom}(n=100, p=1/5),$ you have $\mu = E(X) = np = 20,\;$ $\sigma^2 =Var(X) = 16,\;$ $\sigma = SD(X) = 4.$ Then $$P(X \ge 40) = P(X>39.5)\\ = P\left(\frac{X - \mu}{\sigma} > \frac{39.5-20}{4} = 4.875\right)\\ \approx P(Z > 4.875) \approx 0, $$ where $Z$ has a standard normal distribution. 1 - pnorm(4.875) [1] 5.440423e-07 In the figure below, the P-value is the (very small) sum of heights of bars to the right of the vertical dotted line. The red curve shows the density function of the approximating normal distribution.
Likelihood of my friend being able to guess skittle taste Consider the first case with 2 out of 3 correct: Under the null hypothesis that your friend is purely guessing, the number correct is $X \sim \mathsf{Binom}(n=3, p=1/5).$ A test of the null hypothesis
26,651
Standardization of variables and collinearity
It was not so clear to me what sort of standardization was meant, and while looking for the history I picked up two interesting references. This recent article has a historic overview in the introduction: García, J., Salmerón, R., García, C., & López Martín, M. D. M. (2016). Standardization of variables and collinearity diagnostic in ridge regression. International Statistical Review, 84(2), 245-266 I found another interesting article that sort of claims to show that standardization, or centering, has no effect at all. Echambadi, R., & Hess, J. D. (2007). Mean-centering does not alleviate collinearity problems in moderated multiple regression models. Marketing Science, 26(3), 438-445. To me this criticism all seems a bit like missing the point about the idea of centering. The only thing that Echambadi and Hess show is that the models are equivalent and that you can express the coefficients of the centered model in terms of the coefficients of the non-centered model, and vice versa (resulting in similar variance/error of the coefficients). Echambadi and Hess' result is a bit trivial and I believe that this (those relations and equivalence between the coefficients) is not claimed to be untrue by anybody. Nobody claimed that those relations between the coefficients are not true. And it is not the point of centering variables. The point of the centering is that in models with linear and quadratic terms you can choose different coordinate scales such that you end up working in a frame that has no or less correlation between the variables. Say you wish to express the effect of time $t$ on some variable $Y$ and you wish to do this over some period expressed in terms of years AD say from 1998 to 2018. In that case, what the centering technique means to resolve is that "If you express the accuracy of the coefficients for the linear and quadratic dependencies on time, then they will have more variance when you use time $t$ ranging from 1998 to 2018 instead of a centered time $t^\prime$ ranging from -10 to 10". $$Y = a + bt + ct^2$$ versus $$Y = a^\prime + b^\prime(t-T) + c^\prime(t-T)^2$$ Of course, these two models are equivalent and instead of centering you can get the exact same result (and hence the same error of the estimated coefficients) by computing the coefficients like $$\begin{array}{} a &=& a^\prime - b^\prime T + c^\prime T^2 \\ b &=& b^\prime - 2 c^\prime T \\ c &=& c^\prime \end{array}$$ also when you do ANOVA or use expressions like $R^2$ then there will be no difference. However, that is not at all the point of mean-centering. The point of mean-centering is that sometimes one wants to communicate the coefficients and their estimated variance/accuracy or confidence intervals, and for those cases it does matter how the model is expressed. Example: a physicists wishes to express some experimental relation for some parameter X as a quadratic function of temperature. T X 298 1230 308 1308 318 1371 328 1470 338 1534 348 1601 358 1695 368 1780 378 1863 388 1940 398 2047 would it not be better to report the 95% intervals for coefficients like 2.5 % 97.5 % (Intercept) 1602 1621 T-348 7.87 8.26 (T-348)^2 0.0029 0.0166 instead of 2.5 % 97.5 % (Intercept) -839 816 T -3.52 6.05 T^2 0.0029 0.0166 In the latter case the coefficients will be expressed by seemingly large error margins (but telling nothing about the error in the model), and in addition the correlation between the distribution of the error won't be clear (in the first case the error in the coefficients will not be correlated). If one claims, like Echambadi and Hess, that the two expressions are just equivalent and the centering does not matter, then we should (as a consequence using similar arguments) also claim that expressions for model coefficients (when there is no natural intercept and the choice is arbitrary) in terms of confidence intervals or standard error are never making sense. In this question/answer an image is shown that also presents this idea how the 95% confidence intervals do not tell much about the certainty of the coefficients (at least not intuitively) when the the errors in the estimates of the coefficients are correlated.
Standardization of variables and collinearity
It was not so clear to me what sort of standardization was meant, and while looking for the history I picked up two interesting references. This recent article has a historic overview in the introduct
Standardization of variables and collinearity It was not so clear to me what sort of standardization was meant, and while looking for the history I picked up two interesting references. This recent article has a historic overview in the introduction: García, J., Salmerón, R., García, C., & López Martín, M. D. M. (2016). Standardization of variables and collinearity diagnostic in ridge regression. International Statistical Review, 84(2), 245-266 I found another interesting article that sort of claims to show that standardization, or centering, has no effect at all. Echambadi, R., & Hess, J. D. (2007). Mean-centering does not alleviate collinearity problems in moderated multiple regression models. Marketing Science, 26(3), 438-445. To me this criticism all seems a bit like missing the point about the idea of centering. The only thing that Echambadi and Hess show is that the models are equivalent and that you can express the coefficients of the centered model in terms of the coefficients of the non-centered model, and vice versa (resulting in similar variance/error of the coefficients). Echambadi and Hess' result is a bit trivial and I believe that this (those relations and equivalence between the coefficients) is not claimed to be untrue by anybody. Nobody claimed that those relations between the coefficients are not true. And it is not the point of centering variables. The point of the centering is that in models with linear and quadratic terms you can choose different coordinate scales such that you end up working in a frame that has no or less correlation between the variables. Say you wish to express the effect of time $t$ on some variable $Y$ and you wish to do this over some period expressed in terms of years AD say from 1998 to 2018. In that case, what the centering technique means to resolve is that "If you express the accuracy of the coefficients for the linear and quadratic dependencies on time, then they will have more variance when you use time $t$ ranging from 1998 to 2018 instead of a centered time $t^\prime$ ranging from -10 to 10". $$Y = a + bt + ct^2$$ versus $$Y = a^\prime + b^\prime(t-T) + c^\prime(t-T)^2$$ Of course, these two models are equivalent and instead of centering you can get the exact same result (and hence the same error of the estimated coefficients) by computing the coefficients like $$\begin{array}{} a &=& a^\prime - b^\prime T + c^\prime T^2 \\ b &=& b^\prime - 2 c^\prime T \\ c &=& c^\prime \end{array}$$ also when you do ANOVA or use expressions like $R^2$ then there will be no difference. However, that is not at all the point of mean-centering. The point of mean-centering is that sometimes one wants to communicate the coefficients and their estimated variance/accuracy or confidence intervals, and for those cases it does matter how the model is expressed. Example: a physicists wishes to express some experimental relation for some parameter X as a quadratic function of temperature. T X 298 1230 308 1308 318 1371 328 1470 338 1534 348 1601 358 1695 368 1780 378 1863 388 1940 398 2047 would it not be better to report the 95% intervals for coefficients like 2.5 % 97.5 % (Intercept) 1602 1621 T-348 7.87 8.26 (T-348)^2 0.0029 0.0166 instead of 2.5 % 97.5 % (Intercept) -839 816 T -3.52 6.05 T^2 0.0029 0.0166 In the latter case the coefficients will be expressed by seemingly large error margins (but telling nothing about the error in the model), and in addition the correlation between the distribution of the error won't be clear (in the first case the error in the coefficients will not be correlated). If one claims, like Echambadi and Hess, that the two expressions are just equivalent and the centering does not matter, then we should (as a consequence using similar arguments) also claim that expressions for model coefficients (when there is no natural intercept and the choice is arbitrary) in terms of confidence intervals or standard error are never making sense. In this question/answer an image is shown that also presents this idea how the 95% confidence intervals do not tell much about the certainty of the coefficients (at least not intuitively) when the the errors in the estimates of the coefficients are correlated.
Standardization of variables and collinearity It was not so clear to me what sort of standardization was meant, and while looking for the history I picked up two interesting references. This recent article has a historic overview in the introduct
26,652
Should OOB (Out Of Bag) error be less than a Test set error in Random Forests?
To my knowledge, no. There are more strange things in this plot, e.g. why does bagging outperform the random forest with respect to the OOB error? It's hard to explain the observed without more information on the data, e.g. how many samples were used in training and testing? How was training and testing performed? If the model was trained and tested on only a small set of samples, the observed difference in error rate might be not significant. Further, if the problem has a rather steep learning curve and testing was performed by holding out a portion of the data while OOB error estimation was performed on the entire data-set, under-fitting might be another explanation.
Should OOB (Out Of Bag) error be less than a Test set error in Random Forests?
To my knowledge, no. There are more strange things in this plot, e.g. why does bagging outperform the random forest with respect to the OOB error? It's hard to explain the observed without more infor
Should OOB (Out Of Bag) error be less than a Test set error in Random Forests? To my knowledge, no. There are more strange things in this plot, e.g. why does bagging outperform the random forest with respect to the OOB error? It's hard to explain the observed without more information on the data, e.g. how many samples were used in training and testing? How was training and testing performed? If the model was trained and tested on only a small set of samples, the observed difference in error rate might be not significant. Further, if the problem has a rather steep learning curve and testing was performed by holding out a portion of the data while OOB error estimation was performed on the entire data-set, under-fitting might be another explanation.
Should OOB (Out Of Bag) error be less than a Test set error in Random Forests? To my knowledge, no. There are more strange things in this plot, e.g. why does bagging outperform the random forest with respect to the OOB error? It's hard to explain the observed without more infor
26,653
ACF and PACF of residuals to determine ARIMA model
The thread Terms "cut off" and "tail off" about ACF, PACF functions on this forum will come in handy, Benjamin! In looking at your plots, I see that the PACF cuts off after 2 lags and the ACF 'decays' towards zero. As per the above thread, that would suggest an AR(2) process for the residuals from your initial regression model. In general, the 'decay' in the ACF can look like what you have in your plot (i.e., exponential decay) or have some sort of sinusoidal flavour, as seen in What does my ACF graph tell me about my data?, for instance. If you work in R, you could try to fit an ARIMA process to your residuals using the auto.arima function from the forecast package just to see how 'close' your guess (or mine) that the residuals follow an AR(p) process - where p = 4 in your guess or p = 2 in mine - would be to what auto.arima comes with after automated selection of a time series model from the ARIMA class for the model residuals. Just use something like this: install.packages("forecast") require(forecast) auto.arima(residuals(initialreg)) and see what comes out. Another thing to keep in mind is the length of your time series of regression residuals - if that series is not too long, you'll know upfront you won't be able to fit an AR model with too many parameters to it. In particular, you might be able yo fit an AR(1) or AR(2) model, but not an AR(4) or AR(5). The shorter the series, the less complex the AR model that it can support. Of course, after you fit your AR(2) model to the regression residuals, you have to look at diagnostic plots of the AR(2) model residuals to make sure they look fine (i.e., like white noise).
ACF and PACF of residuals to determine ARIMA model
The thread Terms "cut off" and "tail off" about ACF, PACF functions on this forum will come in handy, Benjamin! In looking at your plots, I see that the PACF cuts off after 2 lags and the ACF 'decays'
ACF and PACF of residuals to determine ARIMA model The thread Terms "cut off" and "tail off" about ACF, PACF functions on this forum will come in handy, Benjamin! In looking at your plots, I see that the PACF cuts off after 2 lags and the ACF 'decays' towards zero. As per the above thread, that would suggest an AR(2) process for the residuals from your initial regression model. In general, the 'decay' in the ACF can look like what you have in your plot (i.e., exponential decay) or have some sort of sinusoidal flavour, as seen in What does my ACF graph tell me about my data?, for instance. If you work in R, you could try to fit an ARIMA process to your residuals using the auto.arima function from the forecast package just to see how 'close' your guess (or mine) that the residuals follow an AR(p) process - where p = 4 in your guess or p = 2 in mine - would be to what auto.arima comes with after automated selection of a time series model from the ARIMA class for the model residuals. Just use something like this: install.packages("forecast") require(forecast) auto.arima(residuals(initialreg)) and see what comes out. Another thing to keep in mind is the length of your time series of regression residuals - if that series is not too long, you'll know upfront you won't be able to fit an AR model with too many parameters to it. In particular, you might be able yo fit an AR(1) or AR(2) model, but not an AR(4) or AR(5). The shorter the series, the less complex the AR model that it can support. Of course, after you fit your AR(2) model to the regression residuals, you have to look at diagnostic plots of the AR(2) model residuals to make sure they look fine (i.e., like white noise).
ACF and PACF of residuals to determine ARIMA model The thread Terms "cut off" and "tail off" about ACF, PACF functions on this forum will come in handy, Benjamin! In looking at your plots, I see that the PACF cuts off after 2 lags and the ACF 'decays'
26,654
ACF and PACF of residuals to determine ARIMA model
To review , auto.arima in a brute force list-based procedure that tries a fixed set of models and selects the calculated AIC based upon estimated parameters. The AIC should be calculated from residuals using models that control for intervention administration, otherwise the intervention effects are taken to be Gaussian noise, underestimating the actual model's autoregressive effect and thus miscalculating the model parameters which leads directly to an incorrect error sum of squares and ultimately an incorrect AIC and bad model identification. Most SE responders do not point out this assumption when they promote the auto.arima tool as they are unaware of this subtlety. Modern/Correct/Advanced ARIMA time series analysis is conducted by identifying a starting model and then iterating to refine the initially suggested model as implied by @isabella-ghement and then carefully examining the residuals for the existence of structure BOTH arima AND deterministic structure like pulses, level.step shifts, seasonal pulses and.or local time trends. As an example of a very bad model identification using auto.arima see https://www.omicsonline.org/open-access/an-implementation-of-the-mycielski-algorithm-as-a-predictor-in-r-2090-4541-1000195.php?aid=65324 . Furthermore your model residuals may still have an impact from lags of your predictor. Thus is often the case when your combined regression model + ARIMA structure + Identified Interventions has insufficient lag X structures. If you wish to post your data I will try and help further to give better guidance.
ACF and PACF of residuals to determine ARIMA model
To review , auto.arima in a brute force list-based procedure that tries a fixed set of models and selects the calculated AIC based upon estimated parameters. The AIC should be calculated from residual
ACF and PACF of residuals to determine ARIMA model To review , auto.arima in a brute force list-based procedure that tries a fixed set of models and selects the calculated AIC based upon estimated parameters. The AIC should be calculated from residuals using models that control for intervention administration, otherwise the intervention effects are taken to be Gaussian noise, underestimating the actual model's autoregressive effect and thus miscalculating the model parameters which leads directly to an incorrect error sum of squares and ultimately an incorrect AIC and bad model identification. Most SE responders do not point out this assumption when they promote the auto.arima tool as they are unaware of this subtlety. Modern/Correct/Advanced ARIMA time series analysis is conducted by identifying a starting model and then iterating to refine the initially suggested model as implied by @isabella-ghement and then carefully examining the residuals for the existence of structure BOTH arima AND deterministic structure like pulses, level.step shifts, seasonal pulses and.or local time trends. As an example of a very bad model identification using auto.arima see https://www.omicsonline.org/open-access/an-implementation-of-the-mycielski-algorithm-as-a-predictor-in-r-2090-4541-1000195.php?aid=65324 . Furthermore your model residuals may still have an impact from lags of your predictor. Thus is often the case when your combined regression model + ARIMA structure + Identified Interventions has insufficient lag X structures. If you wish to post your data I will try and help further to give better guidance.
ACF and PACF of residuals to determine ARIMA model To review , auto.arima in a brute force list-based procedure that tries a fixed set of models and selects the calculated AIC based upon estimated parameters. The AIC should be calculated from residual
26,655
What ever happened to Fuzzy Logic?
My reply is technically more relevant to fuzzy sets rather than fuzzy logic, but the two concepts are practically inseparable. I delved into the academic journal articles on fuzzy logic a couple of years ago in order to write a tutorial series on implementing fuzzy sets in SQL Server. Although I can hardly be considered an expert, I'm fairly familiar with the literature and use the techniques regularly to solve practical problems. The strong impression I gleaned from the published research is that the practical potential of fuzzy sets is still untapped, mainly due to a deluge of research on dozens of other families of techniques that can solve complementary sets of questions. The Crowded Marketplace of Ideas in Data Science/Machine Learning etc. There's been such rapid progress in Support Vector Machines, neural nets, random forests, etc. that it's impossible for specialists, analysts, data scientists, programmers or consumers of their products to keep up with it all. In my series of blog posts I speak at length on how the development of algorithms for fuzzy sets and logical are generally 20+ years ahead of the available software, but the same can be said of many related fields; I read intensively on neural nets and can think of scores of worthwhile neural architectures that were developed decades ago but never put widely into practice, let alone coded in easily available software. That being said, fuzzy logic and sets are at an odd disadvantage in this crowded marketplace of ideas, mainly because of their moniker, which was controversial back when Lofti A. Zadeh coined it. The point of fuzzy techniques is simply to approximate certain classes of discretely valued data on continuous scales, but terms like "approximate continuous-valued logic" and “graded sets” aren't exactly eye-catching. Zadeh admitted that he used the term "fuzzy" in part because it was attention-getting, but looking back, it may have subtly garnered the wrong kind of attention. How the Term "Fuzz" Backfires To a data scientist, analyst or programmer, it's a term that may evoke a vibe of "cool tech"; to those interested in AI/data mining/etc. etc. only insofar as it can solve business problems, "fuzzy" sounds like an impractical hassle. To a corporate manager, doctor involved into medical research, or any other consumer not in the know, it may evoke images of stuffed animals, 70s cop shows or something out of George Carlin's fridge. There has always been a tension in industry between the two groups, with the latter often reining in the former from writing code and performing research merely for the sake of intellectual curiosity rather than profit; unless the first group can explain why these fuzzy techniques are profitable then the wariness of the first will prevent their adoption. Uncertainty Management & the Family of Fuzzy Set Applications The point of fuzzy set techniques are to remove fuzz that is already inherent in the data, in the form of imprecise discrete values that can be modeled better on approximated continuous scales, contrary to the widespread misperception that "fuzz" is something you add in, like a special topping on a pizza. That distinction may be simple but it encompasses a wide variety of potential applications, ranging from natural language processing to Decision Theory to control of nonlinear systems. Probability hasn't absorbed fuzzy logic as Cliff AB suggested primarily because it is just a small subset of the interpretations that can be attached to fuzzy values. Fuzzy membership functions are fairly simple in that they just grade how much a record belongs to a particular set by assigning one or more continuous values, usually on a scale of 0 to 1 (although for some applications I've found that -1 to 1 can be more useful). The meaning we assign to those numbers is up to us, because they can signify anything we want, such as Bayesian degrees of belief, confidence in a particular decision, possibility distributions, neural net activations, scaled variance, correlation, etc. etc., not just PDF, EDF or CDF values. I go into much greater detail in my blog series and at this CV post, much of which was derived by working through my favorite fuzzy resource, George J. Klir, and Bo Yuan's Fuzzy Sets and Fuzzy Logic: Theory and Applications (1995). They go into much greater detail into how to derive entire programs of “Uncertainty Management” from fuzzy sets. If fuzzy logic and sets were a consumer product, we could say that it's failed to date due to lack of marketing and product evangelization, plus a paradoxical choice of a brand name. While researching this I can't recall running into a single academic journal article that tried to debunk any of these applications in a manner akin to Minksy and Papert's infamous article on perceptrons. There's just a lot of competition in the marketplace of ideas these days for the attention of developers, theorists, data scientists and the like for products that are applicable to similar sets of problems, which is a positive side effect of rapid technical progress. The downside is that there's a lot of low-hanging fruit here that's going unpicked, especially in the realm of data modeling where they're most applicable. As a matter of fact, I recently used them to solve a particularly puzzling language modeling problem and was applying them to a similar one when I took a break to check CV and found this post.
What ever happened to Fuzzy Logic?
My reply is technically more relevant to fuzzy sets rather than fuzzy logic, but the two concepts are practically inseparable. I delved into the academic journal articles on fuzzy logic a couple of ye
What ever happened to Fuzzy Logic? My reply is technically more relevant to fuzzy sets rather than fuzzy logic, but the two concepts are practically inseparable. I delved into the academic journal articles on fuzzy logic a couple of years ago in order to write a tutorial series on implementing fuzzy sets in SQL Server. Although I can hardly be considered an expert, I'm fairly familiar with the literature and use the techniques regularly to solve practical problems. The strong impression I gleaned from the published research is that the practical potential of fuzzy sets is still untapped, mainly due to a deluge of research on dozens of other families of techniques that can solve complementary sets of questions. The Crowded Marketplace of Ideas in Data Science/Machine Learning etc. There's been such rapid progress in Support Vector Machines, neural nets, random forests, etc. that it's impossible for specialists, analysts, data scientists, programmers or consumers of their products to keep up with it all. In my series of blog posts I speak at length on how the development of algorithms for fuzzy sets and logical are generally 20+ years ahead of the available software, but the same can be said of many related fields; I read intensively on neural nets and can think of scores of worthwhile neural architectures that were developed decades ago but never put widely into practice, let alone coded in easily available software. That being said, fuzzy logic and sets are at an odd disadvantage in this crowded marketplace of ideas, mainly because of their moniker, which was controversial back when Lofti A. Zadeh coined it. The point of fuzzy techniques is simply to approximate certain classes of discretely valued data on continuous scales, but terms like "approximate continuous-valued logic" and “graded sets” aren't exactly eye-catching. Zadeh admitted that he used the term "fuzzy" in part because it was attention-getting, but looking back, it may have subtly garnered the wrong kind of attention. How the Term "Fuzz" Backfires To a data scientist, analyst or programmer, it's a term that may evoke a vibe of "cool tech"; to those interested in AI/data mining/etc. etc. only insofar as it can solve business problems, "fuzzy" sounds like an impractical hassle. To a corporate manager, doctor involved into medical research, or any other consumer not in the know, it may evoke images of stuffed animals, 70s cop shows or something out of George Carlin's fridge. There has always been a tension in industry between the two groups, with the latter often reining in the former from writing code and performing research merely for the sake of intellectual curiosity rather than profit; unless the first group can explain why these fuzzy techniques are profitable then the wariness of the first will prevent their adoption. Uncertainty Management & the Family of Fuzzy Set Applications The point of fuzzy set techniques are to remove fuzz that is already inherent in the data, in the form of imprecise discrete values that can be modeled better on approximated continuous scales, contrary to the widespread misperception that "fuzz" is something you add in, like a special topping on a pizza. That distinction may be simple but it encompasses a wide variety of potential applications, ranging from natural language processing to Decision Theory to control of nonlinear systems. Probability hasn't absorbed fuzzy logic as Cliff AB suggested primarily because it is just a small subset of the interpretations that can be attached to fuzzy values. Fuzzy membership functions are fairly simple in that they just grade how much a record belongs to a particular set by assigning one or more continuous values, usually on a scale of 0 to 1 (although for some applications I've found that -1 to 1 can be more useful). The meaning we assign to those numbers is up to us, because they can signify anything we want, such as Bayesian degrees of belief, confidence in a particular decision, possibility distributions, neural net activations, scaled variance, correlation, etc. etc., not just PDF, EDF or CDF values. I go into much greater detail in my blog series and at this CV post, much of which was derived by working through my favorite fuzzy resource, George J. Klir, and Bo Yuan's Fuzzy Sets and Fuzzy Logic: Theory and Applications (1995). They go into much greater detail into how to derive entire programs of “Uncertainty Management” from fuzzy sets. If fuzzy logic and sets were a consumer product, we could say that it's failed to date due to lack of marketing and product evangelization, plus a paradoxical choice of a brand name. While researching this I can't recall running into a single academic journal article that tried to debunk any of these applications in a manner akin to Minksy and Papert's infamous article on perceptrons. There's just a lot of competition in the marketplace of ideas these days for the attention of developers, theorists, data scientists and the like for products that are applicable to similar sets of problems, which is a positive side effect of rapid technical progress. The downside is that there's a lot of low-hanging fruit here that's going unpicked, especially in the realm of data modeling where they're most applicable. As a matter of fact, I recently used them to solve a particularly puzzling language modeling problem and was applying them to a similar one when I took a break to check CV and found this post.
What ever happened to Fuzzy Logic? My reply is technically more relevant to fuzzy sets rather than fuzzy logic, but the two concepts are practically inseparable. I delved into the academic journal articles on fuzzy logic a couple of ye
26,656
What ever happened to Fuzzy Logic?
The reason why fuzzy logic ideas have dropped out of fashion (in ML) are unclear to me. It may well be a bit of many reasons, be they technical, sociological, etc... One thing for sure is that the mathematics of ML for the past years have been dominated by probability/statistics and optimisation, two fields in which fuzzy logic (or ideas issued from the fuzzy literature) can fill in, but in which they usually bring more answers than questions. One another advantage of probabilities and optimisation is that while there may different trends/interpretation within them (e.g., Bayesian vs frequentists), the basic formal/mathematical framework is rather stable for those (it is less clear, in my opinion, for fuzzy logic understood in a broad sense). I think a nice piece of work to figure out the (recent) relation between fuzzy logic and machine learning is the following one: Hüllermeier, E. (2015). Does machine learning need fuzzy logic?. Fuzzy Sets and Systems, 281, 292-299. I think one of the basic idea of fuzzy logic, that is to model concepts that are gradual and provide reasoning tools (mainly extending logic, but not only) associated with it, is still present in some ML ideas, including very recent ones. You just have to look carefully for it as it is rather rare. Two examples include: Farnadi, G., Bach, S. H., Moens, M. F., Getoor, L., & De Cock, M. (2017). Soft quantification in statistical relational learning. Machine Learning, 106(12), 1971-1991. (where references include fuzzy logic ones, including Zadeh seminal paper) Cheng, W., Rademaker, M., De Baets, B., & Hüllermeier, E. (2010, September). Predicting partial orders: ranking with abstention. In Joint European conference on machine learning and knowledge discovery in databases (pp. 215-230). Springer, Berlin, Heidelberg. Overall, to answer your question on a more personal ground, my feeling is that there is not a clear perception of what fuzzy logic could accomplish (in recent views of ML) that probabilities could not, and since the latter is much older and clearly fits better with the ML framework of seeing data issued from a probabilistic population, it was more natural to go with probability and statistics than with fuzzy logic. This also means that if you want to use fuzzy logic in ML, you have to present a convincing, good reason to do so (e.g., using the fact that they extend logic by providing differentiable functions so that you can include logical rules in deep learning techniques).
What ever happened to Fuzzy Logic?
The reason why fuzzy logic ideas have dropped out of fashion (in ML) are unclear to me. It may well be a bit of many reasons, be they technical, sociological, etc... One thing for sure is that the mat
What ever happened to Fuzzy Logic? The reason why fuzzy logic ideas have dropped out of fashion (in ML) are unclear to me. It may well be a bit of many reasons, be they technical, sociological, etc... One thing for sure is that the mathematics of ML for the past years have been dominated by probability/statistics and optimisation, two fields in which fuzzy logic (or ideas issued from the fuzzy literature) can fill in, but in which they usually bring more answers than questions. One another advantage of probabilities and optimisation is that while there may different trends/interpretation within them (e.g., Bayesian vs frequentists), the basic formal/mathematical framework is rather stable for those (it is less clear, in my opinion, for fuzzy logic understood in a broad sense). I think a nice piece of work to figure out the (recent) relation between fuzzy logic and machine learning is the following one: Hüllermeier, E. (2015). Does machine learning need fuzzy logic?. Fuzzy Sets and Systems, 281, 292-299. I think one of the basic idea of fuzzy logic, that is to model concepts that are gradual and provide reasoning tools (mainly extending logic, but not only) associated with it, is still present in some ML ideas, including very recent ones. You just have to look carefully for it as it is rather rare. Two examples include: Farnadi, G., Bach, S. H., Moens, M. F., Getoor, L., & De Cock, M. (2017). Soft quantification in statistical relational learning. Machine Learning, 106(12), 1971-1991. (where references include fuzzy logic ones, including Zadeh seminal paper) Cheng, W., Rademaker, M., De Baets, B., & Hüllermeier, E. (2010, September). Predicting partial orders: ranking with abstention. In Joint European conference on machine learning and knowledge discovery in databases (pp. 215-230). Springer, Berlin, Heidelberg. Overall, to answer your question on a more personal ground, my feeling is that there is not a clear perception of what fuzzy logic could accomplish (in recent views of ML) that probabilities could not, and since the latter is much older and clearly fits better with the ML framework of seeing data issued from a probabilistic population, it was more natural to go with probability and statistics than with fuzzy logic. This also means that if you want to use fuzzy logic in ML, you have to present a convincing, good reason to do so (e.g., using the fact that they extend logic by providing differentiable functions so that you can include logical rules in deep learning techniques).
What ever happened to Fuzzy Logic? The reason why fuzzy logic ideas have dropped out of fashion (in ML) are unclear to me. It may well be a bit of many reasons, be they technical, sociological, etc... One thing for sure is that the mat
26,657
Probability calibration from LightGBM model with class imbalance
I would suggest not changing the (calibrated) predicted probabilities. Some further points: While calibrated probabilities appearing "low" might be counter-intuitive, it might also be more realistic given the nature of the problem. Especially when operating in an imbalanced setting, predicting that a particular user/person has a very high absolute probability of being in the very rare positive class might be misleading/over-confident. I am not 100% clear from your post how the calibration was done. Assuming we did repeated-CV $2$ times $5$-fold cross-validation: Within each of the 10 executions should use a separate say $K$-fold internal cross-validation with ($K-1$) folds for learning the model and $1$ for fitting the calibration map. Then $K$ calibrated classifiers are generated within each execution and the outputs of them are averaged to provide predictions on the test fold. (Platt's original paper Probabilities for SV Machines uses $K=3$ throughout but that is not a hard rule.) Given we are calibrating the probabilities of our classifier it would make sense to use proper scoring rule metrics like Brier score, Continuous Ranked Probability Score (CRPS), Logarithmic score too (the latter assuming we do not have any $0$ or $1$ probabilities being predicted). After we have decided the threshold $T$ for our probabilistic classifier, we are good to explain what it does. Indeeed, the risk classification might suggest to "treat any person with risk higher than $0.03$"; that is fine if we can relate it to the relevant misclassification costs. Similarly, if misclassification costs are unavailable, if we use a proper scoring rule like Brier, we are still good; we have calibrated probabilistic predictions, anyway.
Probability calibration from LightGBM model with class imbalance
I would suggest not changing the (calibrated) predicted probabilities. Some further points: While calibrated probabilities appearing "low" might be counter-intuitive, it might also be more realistic
Probability calibration from LightGBM model with class imbalance I would suggest not changing the (calibrated) predicted probabilities. Some further points: While calibrated probabilities appearing "low" might be counter-intuitive, it might also be more realistic given the nature of the problem. Especially when operating in an imbalanced setting, predicting that a particular user/person has a very high absolute probability of being in the very rare positive class might be misleading/over-confident. I am not 100% clear from your post how the calibration was done. Assuming we did repeated-CV $2$ times $5$-fold cross-validation: Within each of the 10 executions should use a separate say $K$-fold internal cross-validation with ($K-1$) folds for learning the model and $1$ for fitting the calibration map. Then $K$ calibrated classifiers are generated within each execution and the outputs of them are averaged to provide predictions on the test fold. (Platt's original paper Probabilities for SV Machines uses $K=3$ throughout but that is not a hard rule.) Given we are calibrating the probabilities of our classifier it would make sense to use proper scoring rule metrics like Brier score, Continuous Ranked Probability Score (CRPS), Logarithmic score too (the latter assuming we do not have any $0$ or $1$ probabilities being predicted). After we have decided the threshold $T$ for our probabilistic classifier, we are good to explain what it does. Indeeed, the risk classification might suggest to "treat any person with risk higher than $0.03$"; that is fine if we can relate it to the relevant misclassification costs. Similarly, if misclassification costs are unavailable, if we use a proper scoring rule like Brier, we are still good; we have calibrated probabilistic predictions, anyway.
Probability calibration from LightGBM model with class imbalance I would suggest not changing the (calibrated) predicted probabilities. Some further points: While calibrated probabilities appearing "low" might be counter-intuitive, it might also be more realistic
26,658
Probability calibration from LightGBM model with class imbalance
Instead of performing a sigmoid/Platt regression, you can try an isotonic one, as described here: https://scikit-learn.org/stable/modules/calibration.html#isotonic I have had better results with isotonic regressions, by which I mean that the calibrated model spans the whole probability range and is closer to a linear relation. The article that I referenced also describes the CalibratedClassifierCV which you can use to perform the calibration with both sigmoid and isotonic regressors.
Probability calibration from LightGBM model with class imbalance
Instead of performing a sigmoid/Platt regression, you can try an isotonic one, as described here: https://scikit-learn.org/stable/modules/calibration.html#isotonic I have had better results with isoto
Probability calibration from LightGBM model with class imbalance Instead of performing a sigmoid/Platt regression, you can try an isotonic one, as described here: https://scikit-learn.org/stable/modules/calibration.html#isotonic I have had better results with isotonic regressions, by which I mean that the calibrated model spans the whole probability range and is closer to a linear relation. The article that I referenced also describes the CalibratedClassifierCV which you can use to perform the calibration with both sigmoid and isotonic regressors.
Probability calibration from LightGBM model with class imbalance Instead of performing a sigmoid/Platt regression, you can try an isotonic one, as described here: https://scikit-learn.org/stable/modules/calibration.html#isotonic I have had better results with isoto
26,659
Probability calibration from LightGBM model with class imbalance
If you want the output to have a good range, you should definitely tackle the imbalanced data problem. You can choose - depending on your data and especially the number of occurence by class - either to oversample the underrepresented class (be careful this leads to lower the variance of this class). Or you can undersample the overrepresented class (the disadvantage of this method is that you don't use all your data and you may miss important samples). A third method exists: weights your data. Thus you give more weight to the underrepresented class. It allows you to use all your data and avoid to change the variance.
Probability calibration from LightGBM model with class imbalance
If you want the output to have a good range, you should definitely tackle the imbalanced data problem. You can choose - depending on your data and especially the number of occurence by class - either
Probability calibration from LightGBM model with class imbalance If you want the output to have a good range, you should definitely tackle the imbalanced data problem. You can choose - depending on your data and especially the number of occurence by class - either to oversample the underrepresented class (be careful this leads to lower the variance of this class). Or you can undersample the overrepresented class (the disadvantage of this method is that you don't use all your data and you may miss important samples). A third method exists: weights your data. Thus you give more weight to the underrepresented class. It allows you to use all your data and avoid to change the variance.
Probability calibration from LightGBM model with class imbalance If you want the output to have a good range, you should definitely tackle the imbalanced data problem. You can choose - depending on your data and especially the number of occurence by class - either
26,660
The Fishing Problem
Let $\lambda$ denote the rate of the Poisson process and let $S(x)=1-F(x)$ where $F(x)$ is the cumulative distribution function of the fish size distribution. Let $t=0$ denote the end of the day and let $g(t)$, $t\le 0$, denote the expected catch in the interval $(t,0)$ we obtain if using the optimal strategy. Clearly $g(0)=0$. Also, if we catch a fish of size $x$ at time $t$ we should keep it and stop fishing if it is larger then $g(t)$. So this is our decision rule. Thus, a realisation of the process and the realised decision (green point) may look as follows: Working in continuous time, using ideas from stochastic dynamic programming, the change in $g(t)$ backwards in time is described by a simple differential equation. Consider an infinitesimal time interval $(t-dt,t)$. The probability that we catch a fish of size $X>g(t)$ in this time interval is $$ \lambda dt S(g(t)), $$ otherwise our expected catch will be $g(t)$. Using a formula for mean residual life, the expected size of a fish larger than $g(t)$ as $$ E(X|X>g(t))=g(t)+\frac1{S(g(t))}\int_{g(t)}^\infty S(x)dx. $$ Hence, using the law of total expectation, the expected catch in the interval $(t-dt,0)$ becomes $$ g(t-dt) =[\lambda dt S(g(t))][g(t)+\frac1{S(g(t))}\int_{g(t)}^\infty S(x)dx] + [1-\lambda dt S(g(t)] g(t). $$ Rearranging, we find that $g(t)$ satisfies $$ \frac{dg}{dt}=-\lambda \int_{g(t)}^\infty S(x) dx. \tag{1} $$ Note how $g(t)$ towards the end of the day decline at a rate equal to the product of the Poisson rate $\lambda$ and the mean fish size $\int_0^\infty S(x)dx$ reflecting that we at that point will be best off keeping any fish we might catch. Example 1: Suppose that the fish sizes $X\sim \exp(\alpha)$ such that $S(x)=e^{-\alpha x}$. Equation (1) then simplifies to $$ \frac{dg}{dt}=-\frac\lambda\alpha e^{-\alpha g(t)} $$ which is a separable differential equation. Using the above boundary condition, the solution is $$ g(t) = \frac1\alpha\ln(1-\lambda t), $$ for $t\le 0$ shown in the above Figure for $\alpha=\lambda=1$. The following code compares the mean catch using this strategy computed based on simulations with the theoretical mean $g(-12)$. g <- function(t,lambda, rate) { 1/rate*log(1-lambda*t) } catch <- function(daylength=12, lambda=1, rfn=runif, gfn=g, ...) { n <- rpois(1,daylength*lambda) starttime <- -daylength arrivaltimes <- sort(runif(n,starttime,0)) X <- rfn(n,...) j <- match(TRUE, X > gfn(arrivaltimes,lambda,...)) if (is.na(j)) 0 else X[j] } nsim <- 1e+5 catches <- rep(0,nsim) for (i in 1:nsim) catches[i] <- catch(gfn=g,rfn=rexp,rate=1,lambda=1) > mean(catches) [1] 2.55802 > g(-12,1,1) [1] 2.564949 Example 2: If $X \sim U(0,1)$ a similar derivation leads to $$ g(t) = 1 - \frac1{1-\lambda t/2} $$ as the solution of (1). Note how $g(t)$ tends to the maximum fish size as $t\rightarrow -\infty$.
The Fishing Problem
Let $\lambda$ denote the rate of the Poisson process and let $S(x)=1-F(x)$ where $F(x)$ is the cumulative distribution function of the fish size distribution. Let $t=0$ denote the end of the day and l
The Fishing Problem Let $\lambda$ denote the rate of the Poisson process and let $S(x)=1-F(x)$ where $F(x)$ is the cumulative distribution function of the fish size distribution. Let $t=0$ denote the end of the day and let $g(t)$, $t\le 0$, denote the expected catch in the interval $(t,0)$ we obtain if using the optimal strategy. Clearly $g(0)=0$. Also, if we catch a fish of size $x$ at time $t$ we should keep it and stop fishing if it is larger then $g(t)$. So this is our decision rule. Thus, a realisation of the process and the realised decision (green point) may look as follows: Working in continuous time, using ideas from stochastic dynamic programming, the change in $g(t)$ backwards in time is described by a simple differential equation. Consider an infinitesimal time interval $(t-dt,t)$. The probability that we catch a fish of size $X>g(t)$ in this time interval is $$ \lambda dt S(g(t)), $$ otherwise our expected catch will be $g(t)$. Using a formula for mean residual life, the expected size of a fish larger than $g(t)$ as $$ E(X|X>g(t))=g(t)+\frac1{S(g(t))}\int_{g(t)}^\infty S(x)dx. $$ Hence, using the law of total expectation, the expected catch in the interval $(t-dt,0)$ becomes $$ g(t-dt) =[\lambda dt S(g(t))][g(t)+\frac1{S(g(t))}\int_{g(t)}^\infty S(x)dx] + [1-\lambda dt S(g(t)] g(t). $$ Rearranging, we find that $g(t)$ satisfies $$ \frac{dg}{dt}=-\lambda \int_{g(t)}^\infty S(x) dx. \tag{1} $$ Note how $g(t)$ towards the end of the day decline at a rate equal to the product of the Poisson rate $\lambda$ and the mean fish size $\int_0^\infty S(x)dx$ reflecting that we at that point will be best off keeping any fish we might catch. Example 1: Suppose that the fish sizes $X\sim \exp(\alpha)$ such that $S(x)=e^{-\alpha x}$. Equation (1) then simplifies to $$ \frac{dg}{dt}=-\frac\lambda\alpha e^{-\alpha g(t)} $$ which is a separable differential equation. Using the above boundary condition, the solution is $$ g(t) = \frac1\alpha\ln(1-\lambda t), $$ for $t\le 0$ shown in the above Figure for $\alpha=\lambda=1$. The following code compares the mean catch using this strategy computed based on simulations with the theoretical mean $g(-12)$. g <- function(t,lambda, rate) { 1/rate*log(1-lambda*t) } catch <- function(daylength=12, lambda=1, rfn=runif, gfn=g, ...) { n <- rpois(1,daylength*lambda) starttime <- -daylength arrivaltimes <- sort(runif(n,starttime,0)) X <- rfn(n,...) j <- match(TRUE, X > gfn(arrivaltimes,lambda,...)) if (is.na(j)) 0 else X[j] } nsim <- 1e+5 catches <- rep(0,nsim) for (i in 1:nsim) catches[i] <- catch(gfn=g,rfn=rexp,rate=1,lambda=1) > mean(catches) [1] 2.55802 > g(-12,1,1) [1] 2.564949 Example 2: If $X \sim U(0,1)$ a similar derivation leads to $$ g(t) = 1 - \frac1{1-\lambda t/2} $$ as the solution of (1). Note how $g(t)$ tends to the maximum fish size as $t\rightarrow -\infty$.
The Fishing Problem Let $\lambda$ denote the rate of the Poisson process and let $S(x)=1-F(x)$ where $F(x)$ is the cumulative distribution function of the fish size distribution. Let $t=0$ denote the end of the day and l
26,661
On Yolo, and its loss function
Basically, yolo combines detection and classification into one loss function: the green part corresponds to whether or not any object is there, while the red part corresponds to encouraging correctly determining which object is there, if one is present. Since we are training on some labeled dataset, it means that $p_i(c)$ should be zero except for one class $c$, right? Yes. Notice we are only penalizing the network when there is indeed an object present. But if your question is whether $p_i(c)\in\{0,1\}$, then usually yes, that is how it is done. Why are we interested in confidence score? At the end of the neural net, do we have some decision algorithm that says: if this bounding box as confidence above threshold $c_0$ then displays it and choose class with highest probability? Usually, yes, a threshold is needed exactly as you describe. Often it is a hyper-parameter that can be chosen or cross-validate over. As for your other questions about the "confidence" score, I must agree that the nomenclature is confusing. There are two "viewpoints" one can have about this: (1) a probabilistic confidence measure of whether any object exists in the locale, and (2) a deterministic prediction of the overlap between the local predicted bounding box $\hat{B}$ and the ground truth one $B$. Both outlooks are often conflated, and in some sense can be treated as "equivalent", since we can view $|B\cap \hat{B}|/|B\cup\hat{B}|\in[0,1]$ as a probability. As an aside, there are already a couple other discussions of the yolo loss: Yolo Loss function explanation How to calculate the class probability of a grid cell in YOLO object detection algorithm?
On Yolo, and its loss function
Basically, yolo combines detection and classification into one loss function: the green part corresponds to whether or not any object is there, while the red part corresponds to encouraging correctly
On Yolo, and its loss function Basically, yolo combines detection and classification into one loss function: the green part corresponds to whether or not any object is there, while the red part corresponds to encouraging correctly determining which object is there, if one is present. Since we are training on some labeled dataset, it means that $p_i(c)$ should be zero except for one class $c$, right? Yes. Notice we are only penalizing the network when there is indeed an object present. But if your question is whether $p_i(c)\in\{0,1\}$, then usually yes, that is how it is done. Why are we interested in confidence score? At the end of the neural net, do we have some decision algorithm that says: if this bounding box as confidence above threshold $c_0$ then displays it and choose class with highest probability? Usually, yes, a threshold is needed exactly as you describe. Often it is a hyper-parameter that can be chosen or cross-validate over. As for your other questions about the "confidence" score, I must agree that the nomenclature is confusing. There are two "viewpoints" one can have about this: (1) a probabilistic confidence measure of whether any object exists in the locale, and (2) a deterministic prediction of the overlap between the local predicted bounding box $\hat{B}$ and the ground truth one $B$. Both outlooks are often conflated, and in some sense can be treated as "equivalent", since we can view $|B\cap \hat{B}|/|B\cup\hat{B}|\in[0,1]$ as a probability. As an aside, there are already a couple other discussions of the yolo loss: Yolo Loss function explanation How to calculate the class probability of a grid cell in YOLO object detection algorithm?
On Yolo, and its loss function Basically, yolo combines detection and classification into one loss function: the green part corresponds to whether or not any object is there, while the red part corresponds to encouraging correctly
26,662
Oversampling: whole set or training set
Your test set should be as close to a sample from the distribution on which you are actually going to apply your classifier as possible. I would definitely split your dataset first (in fact, that is usually the first thing I would do after obtaining a dataset), put away your test set, and then do everything you want to do on the training set. Otherwise, it is very easy for biases to creep in. Same applies to your validation sets, e.g. if you are using cross-validation. What you really want is an estimate of how well your approach would work out-of-sample so that you can select the best approach. The best way to get that is to evaluate your approach on actual, unmodified out-of-sample data.
Oversampling: whole set or training set
Your test set should be as close to a sample from the distribution on which you are actually going to apply your classifier as possible. I would definitely split your dataset first (in fact, that is u
Oversampling: whole set or training set Your test set should be as close to a sample from the distribution on which you are actually going to apply your classifier as possible. I would definitely split your dataset first (in fact, that is usually the first thing I would do after obtaining a dataset), put away your test set, and then do everything you want to do on the training set. Otherwise, it is very easy for biases to creep in. Same applies to your validation sets, e.g. if you are using cross-validation. What you really want is an estimate of how well your approach would work out-of-sample so that you can select the best approach. The best way to get that is to evaluate your approach on actual, unmodified out-of-sample data.
Oversampling: whole set or training set Your test set should be as close to a sample from the distribution on which you are actually going to apply your classifier as possible. I would definitely split your dataset first (in fact, that is u
26,663
Does the definition of regular estimator depend on the rate of convergence? If not, should it?
This paper by Van der Vaart (in section 27.3ff) looks at regularity with scaling rates other than $\sqrt{n}$. He argues that the point of regularity is basically to show that you can't get superefficiency. This viewpoint means you want the offsets $h/\sqrt{n}$ to be the ones that give contiguous sequences of distributions; the ones that are distinguishable from $h=0$, but not with power going to 1. So the $\sqrt{n}$ is the consistency rate of the efficient estimator. There's an example on p406 where the rate is $n$ rather than $\sqrt{n}$. For smooth parametric models you get $\sqrt{n}$, but for models where the consistency rate is something else you get something else.
Does the definition of regular estimator depend on the rate of convergence? If not, should it?
This paper by Van der Vaart (in section 27.3ff) looks at regularity with scaling rates other than $\sqrt{n}$. He argues that the point of regularity is basically to show that you can't get supereffic
Does the definition of regular estimator depend on the rate of convergence? If not, should it? This paper by Van der Vaart (in section 27.3ff) looks at regularity with scaling rates other than $\sqrt{n}$. He argues that the point of regularity is basically to show that you can't get superefficiency. This viewpoint means you want the offsets $h/\sqrt{n}$ to be the ones that give contiguous sequences of distributions; the ones that are distinguishable from $h=0$, but not with power going to 1. So the $\sqrt{n}$ is the consistency rate of the efficient estimator. There's an example on p406 where the rate is $n$ rather than $\sqrt{n}$. For smooth parametric models you get $\sqrt{n}$, but for models where the consistency rate is something else you get something else.
Does the definition of regular estimator depend on the rate of convergence? If not, should it? This paper by Van der Vaart (in section 27.3ff) looks at regularity with scaling rates other than $\sqrt{n}$. He argues that the point of regularity is basically to show that you can't get supereffic
26,664
Bayes estimator are immune to selection Bias
As described above, the issue stands with drawing inference on the index and value, (i⁰,μ⁰), of the largest mean of a sample of Normal rvs. What I find surprising in Dawid's presentation is that the Bayesian analysis does not sound that much Bayesian. If given the whole sample, a Bayesian approach should produce a posterior distribution on (i⁰,μ⁰), rather than follow estimation steps, from estimating i⁰ to estimating the associated mean. And if needed, estimators should come from the definition of a particular loss function. When, instead, given the largest point in the sample, and only that point, its distribution changes, so I am fairly bemused by the statement that no adjustment is needed. The prior modelling is also rather surprising in that the priors on the means should be joint rather than a product of independent Normals, since these means are compared and hence comparable. For instance a hierarchical prior seems more appropriate, with location and scale to be estimated from the whole data. Creating a connection between the means... A relevant objection to the use of independent improper priors is that the maximum mean μ⁰ then does not have a well-defined measure. However, I do not think a criticism of some priors versus other is a relevant attack on this "paradox".
Bayes estimator are immune to selection Bias
As described above, the issue stands with drawing inference on the index and value, (i⁰,μ⁰), of the largest mean of a sample of Normal rvs. What I find surprising in Dawid's presentation is that the B
Bayes estimator are immune to selection Bias As described above, the issue stands with drawing inference on the index and value, (i⁰,μ⁰), of the largest mean of a sample of Normal rvs. What I find surprising in Dawid's presentation is that the Bayesian analysis does not sound that much Bayesian. If given the whole sample, a Bayesian approach should produce a posterior distribution on (i⁰,μ⁰), rather than follow estimation steps, from estimating i⁰ to estimating the associated mean. And if needed, estimators should come from the definition of a particular loss function. When, instead, given the largest point in the sample, and only that point, its distribution changes, so I am fairly bemused by the statement that no adjustment is needed. The prior modelling is also rather surprising in that the priors on the means should be joint rather than a product of independent Normals, since these means are compared and hence comparable. For instance a hierarchical prior seems more appropriate, with location and scale to be estimated from the whole data. Creating a connection between the means... A relevant objection to the use of independent improper priors is that the maximum mean μ⁰ then does not have a well-defined measure. However, I do not think a criticism of some priors versus other is a relevant attack on this "paradox".
Bayes estimator are immune to selection Bias As described above, the issue stands with drawing inference on the index and value, (i⁰,μ⁰), of the largest mean of a sample of Normal rvs. What I find surprising in Dawid's presentation is that the B
26,665
Bayes estimator are immune to selection Bias
Even if a bit counter-intuitive the statement is correct. Assume $i^*=5$ for this experiment, then the posterior for $\mu_5$ is really $N(x_5,\sigma^2)$. This counter-intuitive fact is a bit similar to Bayes being immune to (secret) early stopping (that is also very counter-intuitive). The Bayesian reasoning would lead to false conclusions if for each such experiment (imagine your repeat it a few times), only the results for the best variety would be kept. There would be data selection and Bayesian methods are clearly not immune to (secret) data selection. Actually no statistical method is immune to data selection. If such a selection was done, a complete Bayesian reasoning taking this selection into account would easily correct the illusion. However the sentence "Bayes estimator are immune to selection Bias" is a bit dangerous. It is easy to imagine situations where "selection" means something else, like for example selection of explanatory variables, or selection of data. Bayes is not clearly immune to this.
Bayes estimator are immune to selection Bias
Even if a bit counter-intuitive the statement is correct. Assume $i^*=5$ for this experiment, then the posterior for $\mu_5$ is really $N(x_5,\sigma^2)$. This counter-intuitive fact is a bit similar t
Bayes estimator are immune to selection Bias Even if a bit counter-intuitive the statement is correct. Assume $i^*=5$ for this experiment, then the posterior for $\mu_5$ is really $N(x_5,\sigma^2)$. This counter-intuitive fact is a bit similar to Bayes being immune to (secret) early stopping (that is also very counter-intuitive). The Bayesian reasoning would lead to false conclusions if for each such experiment (imagine your repeat it a few times), only the results for the best variety would be kept. There would be data selection and Bayesian methods are clearly not immune to (secret) data selection. Actually no statistical method is immune to data selection. If such a selection was done, a complete Bayesian reasoning taking this selection into account would easily correct the illusion. However the sentence "Bayes estimator are immune to selection Bias" is a bit dangerous. It is easy to imagine situations where "selection" means something else, like for example selection of explanatory variables, or selection of data. Bayes is not clearly immune to this.
Bayes estimator are immune to selection Bias Even if a bit counter-intuitive the statement is correct. Assume $i^*=5$ for this experiment, then the posterior for $\mu_5$ is really $N(x_5,\sigma^2)$. This counter-intuitive fact is a bit similar t
26,666
Why are PCA eigenvectors orthogonal but correlated?
Let $X$ be a random vector $X=(x_1,x_2,\cdots,x_d)^T $ with expected value $\mu$ and variance $\Sigma$. We are looking for such ordered vectors $u_i$, that maximize the variance of $u_i^TX$. Essentialy we are solving $$\max\limits_{u_i} Var(u_i^TX)$$ $$s.t. \quad u_i^T u_i=1.$$ Because we are only interested in the direction of such vectors, we are additionally assuming the unit length of vectors $u_i^T u_i=1$. Vectors $u_i$ are actually not random (because we are working theoretically now, in reality we are replacing the unknown $\Sigma$ and unknown $\mu$ with Empirical sample covariance matrix and mean respectively, @whuber was explaining this from a different perspective) so $$Var(u_i^TX)=u_i^T\Sigma u_i.$$ The optimization problem can be trivially solved by using the Lagrange function $$L(u_i,\lambda_i):=u_i^T \Sigma u_i -\lambda_i(u_i^Tu_i-1).$$ From there we get the necessary condition for constrained extrema $$ \frac{\partial L(u_i,\lambda_i)}{\partial u_i} = 2\Sigma u_i -2\lambda_i u_i=0,$$ which can be reduced to $$\Sigma u_i =\lambda_i u_i,$$ that is by definition the problem of eigenvalues and eigenvectors. Because $\Sigma$ is symmetric and positive semidefinite matrix, the spectral theorem applies and we are able to find orthonormal basis that satisfies $\Sigma=Q\Lambda Q^{-1}=Q\Lambda Q^T$, where $Q$ is made of orthogonal eigenvectors and $\Lambda$ is a diagonal matrix with eigenvalues which are all real. Now we can show that $$cov(u_i^TX,u_j^TX)=u_i^T\Sigma u_j=\lambda_j u_i^Tu_j=0, \quad \forall j \neq i.$$ Trivially for $i=j: \quad cov(u_i^TX,u_j^TX)=\lambda_i.$ So not the eigenvectors, but the projections are uncorrelated.
Why are PCA eigenvectors orthogonal but correlated?
Let $X$ be a random vector $X=(x_1,x_2,\cdots,x_d)^T $ with expected value $\mu$ and variance $\Sigma$. We are looking for such ordered vectors $u_i$, that maximize the variance of $u_i^TX$. Essential
Why are PCA eigenvectors orthogonal but correlated? Let $X$ be a random vector $X=(x_1,x_2,\cdots,x_d)^T $ with expected value $\mu$ and variance $\Sigma$. We are looking for such ordered vectors $u_i$, that maximize the variance of $u_i^TX$. Essentialy we are solving $$\max\limits_{u_i} Var(u_i^TX)$$ $$s.t. \quad u_i^T u_i=1.$$ Because we are only interested in the direction of such vectors, we are additionally assuming the unit length of vectors $u_i^T u_i=1$. Vectors $u_i$ are actually not random (because we are working theoretically now, in reality we are replacing the unknown $\Sigma$ and unknown $\mu$ with Empirical sample covariance matrix and mean respectively, @whuber was explaining this from a different perspective) so $$Var(u_i^TX)=u_i^T\Sigma u_i.$$ The optimization problem can be trivially solved by using the Lagrange function $$L(u_i,\lambda_i):=u_i^T \Sigma u_i -\lambda_i(u_i^Tu_i-1).$$ From there we get the necessary condition for constrained extrema $$ \frac{\partial L(u_i,\lambda_i)}{\partial u_i} = 2\Sigma u_i -2\lambda_i u_i=0,$$ which can be reduced to $$\Sigma u_i =\lambda_i u_i,$$ that is by definition the problem of eigenvalues and eigenvectors. Because $\Sigma$ is symmetric and positive semidefinite matrix, the spectral theorem applies and we are able to find orthonormal basis that satisfies $\Sigma=Q\Lambda Q^{-1}=Q\Lambda Q^T$, where $Q$ is made of orthogonal eigenvectors and $\Lambda$ is a diagonal matrix with eigenvalues which are all real. Now we can show that $$cov(u_i^TX,u_j^TX)=u_i^T\Sigma u_j=\lambda_j u_i^Tu_j=0, \quad \forall j \neq i.$$ Trivially for $i=j: \quad cov(u_i^TX,u_j^TX)=\lambda_i.$ So not the eigenvectors, but the projections are uncorrelated.
Why are PCA eigenvectors orthogonal but correlated? Let $X$ be a random vector $X=(x_1,x_2,\cdots,x_d)^T $ with expected value $\mu$ and variance $\Sigma$. We are looking for such ordered vectors $u_i$, that maximize the variance of $u_i^TX$. Essential
26,667
Why are PCA eigenvectors orthogonal but correlated?
Note that L is the loadings matrix, aka the eigenvectors themselves. This isn't the PCA data matrix. The eigenvectors are bound to provide orthogonality but not $cov=0$. For example, take the matrix: > X <- iris > X$Species <- as.numeric(X$Species) > head(X) Sepal.Length Sepal.Width Petal.Length Petal.Width Species 1 5.1 3.5 1.4 0.2 1 2 4.9 3.0 1.4 0.2 1 3 4.7 3.2 1.3 0.2 1 4 4.6 3.1 1.5 0.2 1 5 5.0 3.6 1.4 0.2 1 6 5.4 3.9 1.7 0.4 1 In PCA, not only you get the eigenvectors of the covariance/correlation matrix (depends on the method) but they are also being orthonormal (that is, $\left \| u_j \right \|=1$ for each eigenvector $u_j$), so we get: > prcomp(X)$rotation PC1 PC2 PC3 PC4 PC5 Sepal.Length 0.33402494 -0.68852577 0.4414776 -0.43312829 0.1784853 Sepal.Width -0.08034626 -0.68474905 -0.6114140 0.30348725 -0.2423462 Petal.Length 0.80059273 0.09713877 0.1466787 0.49080356 -0.2953177 Petal.Width 0.33657862 0.06894557 -0.4202025 0.06667133 0.8372253 Species 0.35740442 0.20703034 -0.4828930 -0.68917499 -0.3482135 and > cor(prcomp(X)$rotation) PC1 PC2 PC3 PC4 PC5 PC1 1.00000000 0.62712979 0.57079328 0.147574029 -0.072934736 PC2 0.62712979 1.00000000 -0.22763304 -0.058852698 0.029086459 PC3 0.57079328 -0.22763304 1.00000000 -0.053565825 0.026473556 PC4 0.14757403 -0.05885270 -0.05356582 1.000000000 0.006844526 PC5 -0.07293474 0.02908646 0.02647356 0.006844526 1.000000000 but note that the PCA'd data is > head(prcomp(X)$x) PC1 PC2 PC3 PC4 PC5 [1,] -2.865415 -0.2962946 -0.041870662 -0.078464301 -0.032047052 [2,] -2.892047 0.1837851 0.175540800 -0.143582265 0.053428970 [3,] -3.054980 0.1748266 -0.049705391 -0.045339514 -0.001205543 [4,] -2.920230 0.3315818 -0.003376012 0.065785303 -0.053882996 [5,] -2.906852 -0.2959169 -0.147159821 -0.004802747 -0.074130194 [6,] -2.489852 -0.7338212 -0.194029844 0.073567444 0.003409809 and its correlation is > round(cor(prcomp(X)$x),14) PC1 PC2 PC3 PC4 PC5 PC1 1 0 0 0 0 PC2 0 1 0 0 0 PC3 0 0 1 0 0 PC4 0 0 0 1 0 PC5 0 0 0 0 1
Why are PCA eigenvectors orthogonal but correlated?
Note that L is the loadings matrix, aka the eigenvectors themselves. This isn't the PCA data matrix. The eigenvectors are bound to provide orthogonality but not $cov=0$. For example, take the matrix:
Why are PCA eigenvectors orthogonal but correlated? Note that L is the loadings matrix, aka the eigenvectors themselves. This isn't the PCA data matrix. The eigenvectors are bound to provide orthogonality but not $cov=0$. For example, take the matrix: > X <- iris > X$Species <- as.numeric(X$Species) > head(X) Sepal.Length Sepal.Width Petal.Length Petal.Width Species 1 5.1 3.5 1.4 0.2 1 2 4.9 3.0 1.4 0.2 1 3 4.7 3.2 1.3 0.2 1 4 4.6 3.1 1.5 0.2 1 5 5.0 3.6 1.4 0.2 1 6 5.4 3.9 1.7 0.4 1 In PCA, not only you get the eigenvectors of the covariance/correlation matrix (depends on the method) but they are also being orthonormal (that is, $\left \| u_j \right \|=1$ for each eigenvector $u_j$), so we get: > prcomp(X)$rotation PC1 PC2 PC3 PC4 PC5 Sepal.Length 0.33402494 -0.68852577 0.4414776 -0.43312829 0.1784853 Sepal.Width -0.08034626 -0.68474905 -0.6114140 0.30348725 -0.2423462 Petal.Length 0.80059273 0.09713877 0.1466787 0.49080356 -0.2953177 Petal.Width 0.33657862 0.06894557 -0.4202025 0.06667133 0.8372253 Species 0.35740442 0.20703034 -0.4828930 -0.68917499 -0.3482135 and > cor(prcomp(X)$rotation) PC1 PC2 PC3 PC4 PC5 PC1 1.00000000 0.62712979 0.57079328 0.147574029 -0.072934736 PC2 0.62712979 1.00000000 -0.22763304 -0.058852698 0.029086459 PC3 0.57079328 -0.22763304 1.00000000 -0.053565825 0.026473556 PC4 0.14757403 -0.05885270 -0.05356582 1.000000000 0.006844526 PC5 -0.07293474 0.02908646 0.02647356 0.006844526 1.000000000 but note that the PCA'd data is > head(prcomp(X)$x) PC1 PC2 PC3 PC4 PC5 [1,] -2.865415 -0.2962946 -0.041870662 -0.078464301 -0.032047052 [2,] -2.892047 0.1837851 0.175540800 -0.143582265 0.053428970 [3,] -3.054980 0.1748266 -0.049705391 -0.045339514 -0.001205543 [4,] -2.920230 0.3315818 -0.003376012 0.065785303 -0.053882996 [5,] -2.906852 -0.2959169 -0.147159821 -0.004802747 -0.074130194 [6,] -2.489852 -0.7338212 -0.194029844 0.073567444 0.003409809 and its correlation is > round(cor(prcomp(X)$x),14) PC1 PC2 PC3 PC4 PC5 PC1 1 0 0 0 0 PC2 0 1 0 0 0 PC3 0 0 1 0 0 PC4 0 0 0 1 0 PC5 0 0 0 0 1
Why are PCA eigenvectors orthogonal but correlated? Note that L is the loadings matrix, aka the eigenvectors themselves. This isn't the PCA data matrix. The eigenvectors are bound to provide orthogonality but not $cov=0$. For example, take the matrix:
26,668
Why are PCA eigenvectors orthogonal but correlated?
A simpler example of how this isn't a contradiction: Suppose $A$ and $B$ are random vectors in $\mathbb{R}^2$ whose distribution is given by: $$ \mathbb{P}\left(A = \begin{bmatrix} +1 \cr 0 \end{bmatrix}, \:B = \begin{bmatrix} 0 \cr +1 \end{bmatrix}\right) = \frac12\\ \mathbb{P}\left(A = \begin{bmatrix} 0 \cr +1 \end{bmatrix}, \:B = \begin{bmatrix} -1 \cr 0 \end{bmatrix} \right) = \frac12\\ $$ So $B$ is always equal to $A$ rotated by 90 degrees anticlockwise. Then $A$ and $B$ are always orthogonal, but they are not uncorrelated.
Why are PCA eigenvectors orthogonal but correlated?
A simpler example of how this isn't a contradiction: Suppose $A$ and $B$ are random vectors in $\mathbb{R}^2$ whose distribution is given by: $$ \mathbb{P}\left(A = \begin{bmatrix} +1 \cr 0 \end{bmatr
Why are PCA eigenvectors orthogonal but correlated? A simpler example of how this isn't a contradiction: Suppose $A$ and $B$ are random vectors in $\mathbb{R}^2$ whose distribution is given by: $$ \mathbb{P}\left(A = \begin{bmatrix} +1 \cr 0 \end{bmatrix}, \:B = \begin{bmatrix} 0 \cr +1 \end{bmatrix}\right) = \frac12\\ \mathbb{P}\left(A = \begin{bmatrix} 0 \cr +1 \end{bmatrix}, \:B = \begin{bmatrix} -1 \cr 0 \end{bmatrix} \right) = \frac12\\ $$ So $B$ is always equal to $A$ rotated by 90 degrees anticlockwise. Then $A$ and $B$ are always orthogonal, but they are not uncorrelated.
Why are PCA eigenvectors orthogonal but correlated? A simpler example of how this isn't a contradiction: Suppose $A$ and $B$ are random vectors in $\mathbb{R}^2$ whose distribution is given by: $$ \mathbb{P}\left(A = \begin{bmatrix} +1 \cr 0 \end{bmatr
26,669
Are there unbiased, non-linear estimators with lower variance than the OLS estimator?
No. Linear regression is also BUE. Source: https://www.ssc.wisc.edu/~bhansen/papers/gauss.pdf
Are there unbiased, non-linear estimators with lower variance than the OLS estimator?
No. Linear regression is also BUE. Source: https://www.ssc.wisc.edu/~bhansen/papers/gauss.pdf
Are there unbiased, non-linear estimators with lower variance than the OLS estimator? No. Linear regression is also BUE. Source: https://www.ssc.wisc.edu/~bhansen/papers/gauss.pdf
Are there unbiased, non-linear estimators with lower variance than the OLS estimator? No. Linear regression is also BUE. Source: https://www.ssc.wisc.edu/~bhansen/papers/gauss.pdf
26,670
Are there unbiased, non-linear estimators with lower variance than the OLS estimator?
The Gauss-Markov theorem gives the conditions where the OLS estimator is the BLUE, and those conditions do not include normality of the residuals. When we also include that normality assumption, then we can remove the "L" and wind up with the "Best Unbiased Estimator", not just the best linear unbiased estimator (section 2.1, example 1 of the Ohio State econometrics notes). However, if we do not make the normality assumption, then we can wind up with nonlinear estimators of the coefficients that have lower variance than the OLS estimate but are unbiased. For example, consider heavy-tailed errors and the solution given by minimizing absolute loss (quantile regression at the median), as I do here.
Are there unbiased, non-linear estimators with lower variance than the OLS estimator?
The Gauss-Markov theorem gives the conditions where the OLS estimator is the BLUE, and those conditions do not include normality of the residuals. When we also include that normality assumption, then
Are there unbiased, non-linear estimators with lower variance than the OLS estimator? The Gauss-Markov theorem gives the conditions where the OLS estimator is the BLUE, and those conditions do not include normality of the residuals. When we also include that normality assumption, then we can remove the "L" and wind up with the "Best Unbiased Estimator", not just the best linear unbiased estimator (section 2.1, example 1 of the Ohio State econometrics notes). However, if we do not make the normality assumption, then we can wind up with nonlinear estimators of the coefficients that have lower variance than the OLS estimate but are unbiased. For example, consider heavy-tailed errors and the solution given by minimizing absolute loss (quantile regression at the median), as I do here.
Are there unbiased, non-linear estimators with lower variance than the OLS estimator? The Gauss-Markov theorem gives the conditions where the OLS estimator is the BLUE, and those conditions do not include normality of the residuals. When we also include that normality assumption, then
26,671
Bias in average age for grandmaster title qualification by age groups?
I think the average age to attain GM title will continue to decrease due to the ratings inflation (discussed in chessbase.com) and other factors such as the increase in number of players who are awarded the title and perhaps even the Flynn effect. However, I do expect that the mean decrease to bottom out at some point as you aren't just born a GM. It requires some minimum amount of deliberate practice and I will go with the 10,000 hour rule as a guess. The year 1950 was when the GM title was first awarded to 27 players who were regarded as the best in the world at the time and were probably GM strength for decades before they were granted the title. Last I recall the GM title requires a minimum rating of 2500 ELO and requires scoring 3 GM norms by attaining required performance levels in FIDE sanctioned tournaments in games against other GMs. If there are more GMs there are greater opportunities to score such norms. It was much harder in the past to find tournaments in the US to obtain such norms. Other ways to get a GM title are to win certain national events and international events (for youngsters) such as the World Junior Open. Wikipedia has the list of grandmasters as of November 2016. Per the "simple approach" I calculated the mean per year and here is a graph showing average age of GMs by year as well as the number of GM titles awarded that year. For the last 5 years: Year Mean Age ----- --------- 2011: 23.786885 2012: 25.925000 2013: 23.086207 2014: 25.250000 2015: 22.194444
Bias in average age for grandmaster title qualification by age groups?
I think the average age to attain GM title will continue to decrease due to the ratings inflation (discussed in chessbase.com) and other factors such as the increase in number of players who are award
Bias in average age for grandmaster title qualification by age groups? I think the average age to attain GM title will continue to decrease due to the ratings inflation (discussed in chessbase.com) and other factors such as the increase in number of players who are awarded the title and perhaps even the Flynn effect. However, I do expect that the mean decrease to bottom out at some point as you aren't just born a GM. It requires some minimum amount of deliberate practice and I will go with the 10,000 hour rule as a guess. The year 1950 was when the GM title was first awarded to 27 players who were regarded as the best in the world at the time and were probably GM strength for decades before they were granted the title. Last I recall the GM title requires a minimum rating of 2500 ELO and requires scoring 3 GM norms by attaining required performance levels in FIDE sanctioned tournaments in games against other GMs. If there are more GMs there are greater opportunities to score such norms. It was much harder in the past to find tournaments in the US to obtain such norms. Other ways to get a GM title are to win certain national events and international events (for youngsters) such as the World Junior Open. Wikipedia has the list of grandmasters as of November 2016. Per the "simple approach" I calculated the mean per year and here is a graph showing average age of GMs by year as well as the number of GM titles awarded that year. For the last 5 years: Year Mean Age ----- --------- 2011: 23.786885 2012: 25.925000 2013: 23.086207 2014: 25.250000 2015: 22.194444
Bias in average age for grandmaster title qualification by age groups? I think the average age to attain GM title will continue to decrease due to the ratings inflation (discussed in chessbase.com) and other factors such as the increase in number of players who are award
26,672
Bias in average age for grandmaster title qualification by age groups?
A simple approach with the given data is a different slicing of the data: Take all chess grandmasters that become grandmaster in a given year (or 5 years bin or 10 years bin) and compute the average age of them. This kind of slicing will be more robust (it is not influenced by grandmasters from the future, but is is sensitive to other effects, mainly for the number of chess players trying to become grandmaster: When it is increasing over time, it will make the average lower over time). There is probably a kind of correction to this effect available.
Bias in average age for grandmaster title qualification by age groups?
A simple approach with the given data is a different slicing of the data: Take all chess grandmasters that become grandmaster in a given year (or 5 years bin or 10 years bin) and compute the average a
Bias in average age for grandmaster title qualification by age groups? A simple approach with the given data is a different slicing of the data: Take all chess grandmasters that become grandmaster in a given year (or 5 years bin or 10 years bin) and compute the average age of them. This kind of slicing will be more robust (it is not influenced by grandmasters from the future, but is is sensitive to other effects, mainly for the number of chess players trying to become grandmaster: When it is increasing over time, it will make the average lower over time). There is probably a kind of correction to this effect available.
Bias in average age for grandmaster title qualification by age groups? A simple approach with the given data is a different slicing of the data: Take all chess grandmasters that become grandmaster in a given year (or 5 years bin or 10 years bin) and compute the average a
26,673
Comparing models using the deviance and log-likelihood ratio tests
The residual deviance is twice the difference between the likelihood in the log scale of the saturated model and that of your proposed model: $$ResidualDeviance=2\times(ll(SaturatedModel)-ll(Proposed Model)) $$ It can not be calculated simply as -2*logLik(model) in R generally, because the likelihood in the log scale of the saturated model is not always $0$. Read this post for mathematical evidence. -2*logLik(model) works for the logistic regression because in this case the likelihood in the log scale of the saturated model is $0$. To calculate the residual deviance of the negative binomial regression model manually in R, you can try this: sum(residuals.glm(m1, "deviance")^2) You are right about the likelihood that adding parameters will always increase the likelihood of a GLM. It is just a matter of statistical significance. It is recommended to choose a model based on the AIC and the BIC rather than the deviance only because the AIC and the BIC penalize you for adding more parameters. I hope it will help.
Comparing models using the deviance and log-likelihood ratio tests
The residual deviance is twice the difference between the likelihood in the log scale of the saturated model and that of your proposed model: $$ResidualDeviance=2\times(ll(SaturatedModel)-ll(Proposed
Comparing models using the deviance and log-likelihood ratio tests The residual deviance is twice the difference between the likelihood in the log scale of the saturated model and that of your proposed model: $$ResidualDeviance=2\times(ll(SaturatedModel)-ll(Proposed Model)) $$ It can not be calculated simply as -2*logLik(model) in R generally, because the likelihood in the log scale of the saturated model is not always $0$. Read this post for mathematical evidence. -2*logLik(model) works for the logistic regression because in this case the likelihood in the log scale of the saturated model is $0$. To calculate the residual deviance of the negative binomial regression model manually in R, you can try this: sum(residuals.glm(m1, "deviance")^2) You are right about the likelihood that adding parameters will always increase the likelihood of a GLM. It is just a matter of statistical significance. It is recommended to choose a model based on the AIC and the BIC rather than the deviance only because the AIC and the BIC penalize you for adding more parameters. I hope it will help.
Comparing models using the deviance and log-likelihood ratio tests The residual deviance is twice the difference between the likelihood in the log scale of the saturated model and that of your proposed model: $$ResidualDeviance=2\times(ll(SaturatedModel)-ll(Proposed
26,674
How many parameters can your model possibly have?
Yes, there should not be 10 million parameters of a model which trained on CIFAR-10 as its input dimension is small (32*32*3 = 3072). It can barely reach to million of parameters, but that model becomes prone to over-fitting. Here is a reasonable structure of convnet trained on CIFAR-10; 2 convolution layer 1 fully connected layer and 1 classification layer (also fully connected). Most of the parameters are concentrated on the last two layers as they are fully connected. Filter size at the first convolution layer is 7x7@32 Pooling size at the first pooling layer is 2x2 Filter size at the second convolution layer is 5x5@16 Pooling size at the second pooling layer is 1x1 (no pooling) I'm assuming a valid convolution and stride number of pooling is equal to pooling size. At his configurations, dimension of the first feature maps are (32-7+1)/2 x (32-7+1)/2 = 13x13@32. Dimension of the second feature maps are (13-5+1)/1 x (13-5+1)/1 = 9x9@16 As convolution layers are rolled into vector before passing into fully connected layer, dimension of the first fully connected layer is 9*9*16 = 1296. Lets assume that the last hidden layer contains 500 units (this hyper-parameter is the most significant one for total number of parameters). And the last layer is 10. At total the number of parameters are 7*7*32 + 5*5*16 + 1296*500 + 500*10 = 1568 + 400 + 648000 + 5000 = 654968. But I expect smaller network can yield better results as the number of samples is relatively small. So if the 500 neurons reduced to 100 neurons, the total number of parameters reduces to 1568 + 400 + 129600 + 5000 = 136568. Maybe it would be better to include another pooling layer at the second layer, or discarding first fully connected layer. As you can see most of the parameters are concentrated at the first fully connected layer. I don't think deeper network can yield significant gain as the dimension of input layer is small (relative to ImageNet). So your point is right. If you concern about over-fitting you can check 'Recuding overfitting' section of Alex's convnet paper
How many parameters can your model possibly have?
Yes, there should not be 10 million parameters of a model which trained on CIFAR-10 as its input dimension is small (32*32*3 = 3072). It can barely reach to million of parameters, but that model becom
How many parameters can your model possibly have? Yes, there should not be 10 million parameters of a model which trained on CIFAR-10 as its input dimension is small (32*32*3 = 3072). It can barely reach to million of parameters, but that model becomes prone to over-fitting. Here is a reasonable structure of convnet trained on CIFAR-10; 2 convolution layer 1 fully connected layer and 1 classification layer (also fully connected). Most of the parameters are concentrated on the last two layers as they are fully connected. Filter size at the first convolution layer is 7x7@32 Pooling size at the first pooling layer is 2x2 Filter size at the second convolution layer is 5x5@16 Pooling size at the second pooling layer is 1x1 (no pooling) I'm assuming a valid convolution and stride number of pooling is equal to pooling size. At his configurations, dimension of the first feature maps are (32-7+1)/2 x (32-7+1)/2 = 13x13@32. Dimension of the second feature maps are (13-5+1)/1 x (13-5+1)/1 = 9x9@16 As convolution layers are rolled into vector before passing into fully connected layer, dimension of the first fully connected layer is 9*9*16 = 1296. Lets assume that the last hidden layer contains 500 units (this hyper-parameter is the most significant one for total number of parameters). And the last layer is 10. At total the number of parameters are 7*7*32 + 5*5*16 + 1296*500 + 500*10 = 1568 + 400 + 648000 + 5000 = 654968. But I expect smaller network can yield better results as the number of samples is relatively small. So if the 500 neurons reduced to 100 neurons, the total number of parameters reduces to 1568 + 400 + 129600 + 5000 = 136568. Maybe it would be better to include another pooling layer at the second layer, or discarding first fully connected layer. As you can see most of the parameters are concentrated at the first fully connected layer. I don't think deeper network can yield significant gain as the dimension of input layer is small (relative to ImageNet). So your point is right. If you concern about over-fitting you can check 'Recuding overfitting' section of Alex's convnet paper
How many parameters can your model possibly have? Yes, there should not be 10 million parameters of a model which trained on CIFAR-10 as its input dimension is small (32*32*3 = 3072). It can barely reach to million of parameters, but that model becom
26,675
What are the ''desirable'' statistical properties of the likelihood ratio test?
It might be good to read What follows if we fail to reject the null hypothesis? before the explanation below. Desirable properties: power In hypothesis testing, the goal is to find 'statistical evidence' for $H_1$. Thereby we can make type I errors, i.e. we reject $H_0$ (and decide that there is evidence in favour of $H_1$) while $H_0$ was true (i.e. $H_1$ is false). So a type I error is 'finding false evidence' for $H_1$. A type II error is made when $H_0$ can not be rejected while it is false in reality, i.e. we ''accept $H_0$'' and we 'miss' the evidence for $H_1$. The probability of a type I error is denoted by $\alpha$, the choosen significance level. The probability of a type II error is denoted as $\beta$ and $1-\beta$ is called the power of the test, it is the probability to find evidence in favour of $H_1$ when $H_1$ is true. In statitistical hypothesis testing the scientist fixes an upper threshold for the probability of a type I error and under that constraint tries to find a test with maximum power, given $\alpha$. The desirable properties of likelihood ratio tests have to do with power In a hypothesis test $H_0: \theta=\theta_0$ versus $H_1: \theta = \theta_1$ the null hypothesis and the alternative hypothesis are called ''simple'', i.e. the parameter is fixed to one value, just as well under $H_0$ as under $H_1$ (more precisely; the distributions are fully determined). The Neyman-Pearson Lemma states that, for hypothesis tests with simple hypothesises, and for given type I error probability, a likelihood ratio test has the highest power. Obviously, high power given $\alpha$ is a desirable property: power is a measure of 'how easy it is to find evidence for $H_1$'. When the hypothesis is composite; like e.g. $H_0: \theta = \theta_1$ versus $H_1: \theta > \theta_1$ then the Neyman-Pearson lemma can not be applied because there are 'multiple values in $H_1$'. If one can find a test such that it is most powerfull for every value 'under $H_1$' then that test is said to be 'uniformly most powerfull' (UMP) (i.e. most powerfull for every value under $H_1$). There is a theorem by Karlin and Rubin that gives the necessary conditions for a likelihood ratio test to be uniformly most powerfull. These conditions are fullfilled for many one-sided (univariate) tests. So the desirable property of the likelihood ratio test lies in the fact that in several cases it has the highest power (although not in all cases). In most cases the existence of an UMP test can not be shown and in many cases (especially the multivariate) it can be shown that an UMP test does not exist. Nevertheless, in some of these cases likelihood ratio tests are applied because of their desirable properties (in the above context), because they are relatively easy to apply, and sometimes because no other tests can be defined. As an example, the one-sided test based on the standard normal distribution is UMP. Intuition behind the likelihood ratio test: If I want to test $H_0: \theta=\theta_0$ versus $H_1: \theta = \theta_1$ then we need an observation $o$ derived from a sample. Note that this is one single value. We know that either $H_0$ is true or $H_1$ is true, so one can compute the probability of $o$ when $H_0$ is true (lets call it $L_0$) and also the probability of observing $o$ when $H_1$ is true (call it $L_1$). If $L_1 > L_0$ then we are inclined to believe that ''probably $H_1$ is true''. So if the ration $\frac{L_1}{L_0} > 1$ we have reasons to believe that $H_1$ is more realistic than $H_0$. If $\frac{L_1}{L_0}$ would be something like $1.001$ then we might conclude that it could be due to chance, so to decide we need a test and thus the distribution of $\frac{L_1}{L_0}$ which is ... a ratio of two likelihoods. I found this pdf on the internet.
What are the ''desirable'' statistical properties of the likelihood ratio test?
It might be good to read What follows if we fail to reject the null hypothesis? before the explanation below. Desirable properties: power In hypothesis testing, the goal is to find 'statistical evide
What are the ''desirable'' statistical properties of the likelihood ratio test? It might be good to read What follows if we fail to reject the null hypothesis? before the explanation below. Desirable properties: power In hypothesis testing, the goal is to find 'statistical evidence' for $H_1$. Thereby we can make type I errors, i.e. we reject $H_0$ (and decide that there is evidence in favour of $H_1$) while $H_0$ was true (i.e. $H_1$ is false). So a type I error is 'finding false evidence' for $H_1$. A type II error is made when $H_0$ can not be rejected while it is false in reality, i.e. we ''accept $H_0$'' and we 'miss' the evidence for $H_1$. The probability of a type I error is denoted by $\alpha$, the choosen significance level. The probability of a type II error is denoted as $\beta$ and $1-\beta$ is called the power of the test, it is the probability to find evidence in favour of $H_1$ when $H_1$ is true. In statitistical hypothesis testing the scientist fixes an upper threshold for the probability of a type I error and under that constraint tries to find a test with maximum power, given $\alpha$. The desirable properties of likelihood ratio tests have to do with power In a hypothesis test $H_0: \theta=\theta_0$ versus $H_1: \theta = \theta_1$ the null hypothesis and the alternative hypothesis are called ''simple'', i.e. the parameter is fixed to one value, just as well under $H_0$ as under $H_1$ (more precisely; the distributions are fully determined). The Neyman-Pearson Lemma states that, for hypothesis tests with simple hypothesises, and for given type I error probability, a likelihood ratio test has the highest power. Obviously, high power given $\alpha$ is a desirable property: power is a measure of 'how easy it is to find evidence for $H_1$'. When the hypothesis is composite; like e.g. $H_0: \theta = \theta_1$ versus $H_1: \theta > \theta_1$ then the Neyman-Pearson lemma can not be applied because there are 'multiple values in $H_1$'. If one can find a test such that it is most powerfull for every value 'under $H_1$' then that test is said to be 'uniformly most powerfull' (UMP) (i.e. most powerfull for every value under $H_1$). There is a theorem by Karlin and Rubin that gives the necessary conditions for a likelihood ratio test to be uniformly most powerfull. These conditions are fullfilled for many one-sided (univariate) tests. So the desirable property of the likelihood ratio test lies in the fact that in several cases it has the highest power (although not in all cases). In most cases the existence of an UMP test can not be shown and in many cases (especially the multivariate) it can be shown that an UMP test does not exist. Nevertheless, in some of these cases likelihood ratio tests are applied because of their desirable properties (in the above context), because they are relatively easy to apply, and sometimes because no other tests can be defined. As an example, the one-sided test based on the standard normal distribution is UMP. Intuition behind the likelihood ratio test: If I want to test $H_0: \theta=\theta_0$ versus $H_1: \theta = \theta_1$ then we need an observation $o$ derived from a sample. Note that this is one single value. We know that either $H_0$ is true or $H_1$ is true, so one can compute the probability of $o$ when $H_0$ is true (lets call it $L_0$) and also the probability of observing $o$ when $H_1$ is true (call it $L_1$). If $L_1 > L_0$ then we are inclined to believe that ''probably $H_1$ is true''. So if the ration $\frac{L_1}{L_0} > 1$ we have reasons to believe that $H_1$ is more realistic than $H_0$. If $\frac{L_1}{L_0}$ would be something like $1.001$ then we might conclude that it could be due to chance, so to decide we need a test and thus the distribution of $\frac{L_1}{L_0}$ which is ... a ratio of two likelihoods. I found this pdf on the internet.
What are the ''desirable'' statistical properties of the likelihood ratio test? It might be good to read What follows if we fail to reject the null hypothesis? before the explanation below. Desirable properties: power In hypothesis testing, the goal is to find 'statistical evide
26,676
Identifiability of a state space model (Dynamic Linear Model)
In my understanding you have to put restrictions on parameters, for example setting them to a constant, to ensure identification. There is no way to rewrite an unidentified model, while preserving all parameters, to a identified model. There is however an algorithm to check whether a SS-model is identified. Try looking up the article: J. Casals, A. Garcia-Hiernaux and M. Jerez, From general State-Space to VARMAX models, Mathematics and Computers in Simulation In this article they give a cookbook to check for identification, but one step is left unexplained, it is the so-called "stair case algorithm" from the book, H. H. Rosenbrock, State Space and Multivariable Theory, John Wiley, New York, 1970, which I never had any luck locating.
Identifiability of a state space model (Dynamic Linear Model)
In my understanding you have to put restrictions on parameters, for example setting them to a constant, to ensure identification. There is no way to rewrite an unidentified model, while preserving all
Identifiability of a state space model (Dynamic Linear Model) In my understanding you have to put restrictions on parameters, for example setting them to a constant, to ensure identification. There is no way to rewrite an unidentified model, while preserving all parameters, to a identified model. There is however an algorithm to check whether a SS-model is identified. Try looking up the article: J. Casals, A. Garcia-Hiernaux and M. Jerez, From general State-Space to VARMAX models, Mathematics and Computers in Simulation In this article they give a cookbook to check for identification, but one step is left unexplained, it is the so-called "stair case algorithm" from the book, H. H. Rosenbrock, State Space and Multivariable Theory, John Wiley, New York, 1970, which I never had any luck locating.
Identifiability of a state space model (Dynamic Linear Model) In my understanding you have to put restrictions on parameters, for example setting them to a constant, to ensure identification. There is no way to rewrite an unidentified model, while preserving all
26,677
Identifiability of a state space model (Dynamic Linear Model)
It is not true that Gaussian state space models (GSSM) are unindentifiable. First, inference on GSSM is inherently Bayesian. It can be shown that Kalman filter recurrences are identical to the equations used to update prior mean and covariances under a Bayesian perspective. Second, a sufficient condition for a GSSM to be identified is that its observability matrix is full rank. Take a look at chapter 5, page 143 of the book by West and Harrison.
Identifiability of a state space model (Dynamic Linear Model)
It is not true that Gaussian state space models (GSSM) are unindentifiable. First, inference on GSSM is inherently Bayesian. It can be shown that Kalman filter recurrences are identical to the equatio
Identifiability of a state space model (Dynamic Linear Model) It is not true that Gaussian state space models (GSSM) are unindentifiable. First, inference on GSSM is inherently Bayesian. It can be shown that Kalman filter recurrences are identical to the equations used to update prior mean and covariances under a Bayesian perspective. Second, a sufficient condition for a GSSM to be identified is that its observability matrix is full rank. Take a look at chapter 5, page 143 of the book by West and Harrison.
Identifiability of a state space model (Dynamic Linear Model) It is not true that Gaussian state space models (GSSM) are unindentifiable. First, inference on GSSM is inherently Bayesian. It can be shown that Kalman filter recurrences are identical to the equatio
26,678
Why does Restricted maximum likelihood yield a better (unbiased) estimate of the variance?
The bias in the variance stemms from the fact that the mean has been estimated from the data and therefore the 'spread of that data around this estimated mean' (i.e. tha variance) is smaller than the spread of the data around the 'true' mean. See also : Intuitive explanation for dividing by $n-1$ when calculating standard deviation? The fixed effets determine the model 'for the mean', therefore, if you can find a variance estimate that was derived without estimating the mean from the data (by 'marginalising out the fixed effects (i.e. the mean)') then this underestimation of the spread (i.e. variance) will be mitigated. This is the 'intuitive' understanding why REML estimates eliminate the bias; you find an estimate for the variance without using the 'estimated mean'.
Why does Restricted maximum likelihood yield a better (unbiased) estimate of the variance?
The bias in the variance stemms from the fact that the mean has been estimated from the data and therefore the 'spread of that data around this estimated mean' (i.e. tha variance) is smaller than the
Why does Restricted maximum likelihood yield a better (unbiased) estimate of the variance? The bias in the variance stemms from the fact that the mean has been estimated from the data and therefore the 'spread of that data around this estimated mean' (i.e. tha variance) is smaller than the spread of the data around the 'true' mean. See also : Intuitive explanation for dividing by $n-1$ when calculating standard deviation? The fixed effets determine the model 'for the mean', therefore, if you can find a variance estimate that was derived without estimating the mean from the data (by 'marginalising out the fixed effects (i.e. the mean)') then this underestimation of the spread (i.e. variance) will be mitigated. This is the 'intuitive' understanding why REML estimates eliminate the bias; you find an estimate for the variance without using the 'estimated mean'.
Why does Restricted maximum likelihood yield a better (unbiased) estimate of the variance? The bias in the variance stemms from the fact that the mean has been estimated from the data and therefore the 'spread of that data around this estimated mean' (i.e. tha variance) is smaller than the
26,679
Why does Restricted maximum likelihood yield a better (unbiased) estimate of the variance?
Check out APPENDIX: THE REML ESTIMATION METHOD from within this SAS-related resource from author David Dickey. "We can always find (n-1) numbers Z with known mean 0 and the same sum of squares and theoretical variance as the n Y values. This motivates the division of the Z sum of squares by the number of Zs, which is n-1." When I was in grad school, REML was made out to be the best thing since sliced bread. From studying the lme4 package, I learned that it doesn't really generalize that well and maybe it isn't that important in the grand scheme of things.
Why does Restricted maximum likelihood yield a better (unbiased) estimate of the variance?
Check out APPENDIX: THE REML ESTIMATION METHOD from within this SAS-related resource from author David Dickey. "We can always find (n-1) numbers Z with known mean 0 and the same sum of squares and th
Why does Restricted maximum likelihood yield a better (unbiased) estimate of the variance? Check out APPENDIX: THE REML ESTIMATION METHOD from within this SAS-related resource from author David Dickey. "We can always find (n-1) numbers Z with known mean 0 and the same sum of squares and theoretical variance as the n Y values. This motivates the division of the Z sum of squares by the number of Zs, which is n-1." When I was in grad school, REML was made out to be the best thing since sliced bread. From studying the lme4 package, I learned that it doesn't really generalize that well and maybe it isn't that important in the grand scheme of things.
Why does Restricted maximum likelihood yield a better (unbiased) estimate of the variance? Check out APPENDIX: THE REML ESTIMATION METHOD from within this SAS-related resource from author David Dickey. "We can always find (n-1) numbers Z with known mean 0 and the same sum of squares and th
26,680
Minimizing symmetric mean absolute percentage error (SMAPE)
I don't think there is a closed-form solution to this question. (I'd be interested in being proven wrong.) I'd assume you will need to simulate. And hope that your predictive posterior is not misspecified too badly. In case it is interesting, we wrote a little paper (see also this presentation) once that explained how minimizing percentage errors can lead to forecasting bias, by rolling standard six-sided dice. We also looked at various flavors of MAPE and wMAPE, but let's concentrate on the sMAPE here. Here is a plot where we simulate "sales" by rolling $n=8$ six-sided dice $N=1,000$ times and plot the average sMAPE, together with pointwise quantiles: fcst <- seq(1,6,by=.01) n.sims <- 1000 n.sales <- 10 confidence <- .8 result.smape <- matrix(nrow=n.sims,ncol=length(fcst)) set.seed(2011) for ( jj in 1:n.sims ) { sales <- sample(seq(1,6),size=n.sales,replace=TRUE) for ( ii in 1:length(fcst) ) { result.smape[jj,ii] <- 2*mean(abs(sales-rep(fcst[ii],n.sales))/(sales+rep(fcst[ii],n.sales))) } } (Note that I'm using the alternative sMAPE formula which divides the denominator by 2.) plot(sales,type="o",ylab="",xlab="",pch=21,bg="black",ylim=c(1,6), main=paste("Sales:",n.sales,"throws of a six-sided die")) plot(fcst,fcst,type="n",ylab="sMAPE",xlab="Forecast",ylim=c(0.3,1.1)) polygon(c(fcst,rev(fcst)),c( apply(result.smape,2,quantile,probs=(1-confidence)/2), rev(apply(result.smape,2,quantile,probs=1-(1-confidence)/2))), density=10,angle=45) lines(fcst,apply(result.smape,2,mean)) legend(x="topright",inset=.02,col="black",lwd=1,legend="sMAPE") Something along these lines may help in your case. (Again, you will need to assume that your posterior predictive distribution is "correct enough" to do this kind of simulation - but you would need to assume that for any other approach, too, so this just adds a general caveat, not a specific issue.) In this simple example of rolling standard six-sided dice, we can actually calculate and plot the expected s(M)APE as a function of the forecast: expected.sape <- function ( fcst ) sum(abs(fcst-seq(1,6))/(seq(1,6)+fcst))/3 plot(fcst,mapply(expected.sape,fcst),type="l",xlab="Forecast",ylab="Expected sAPE") This agrees rather well with the simulation averages above. And it shows nicely that the EsAPE-minimal forecast for rolling a standard six-sided die is a biased 4, instead of the unbiased expectation of 3.5. Additional fun fact: if your predictive distribution is a Poisson with a predicted parameter $\hat{\lambda}<1$, then the forecast that minimizes the expected sAPE is $\hat{y}=1$ - independently of the specific value of $\hat{\lambda}$. At least this is claimed in footnote 1 in Seaman & Bowman (in press, IJF, commentary on the M5 forecasting competiton) without a proof. It's quite easy to see that the EsAPE-minimal forecast satisfies $\hat{y}\geq 1$ (you just show that any alternative forecast $\hat{y}'<1$ will lead to a larger EsAPE). Showing that $\hat{y}'>1$ will lead to a larger EsAPE than $\hat{y}=1$ seems to be a little tedious. However, simulations look reassuring.
Minimizing symmetric mean absolute percentage error (SMAPE)
I don't think there is a closed-form solution to this question. (I'd be interested in being proven wrong.) I'd assume you will need to simulate. And hope that your predictive posterior is not misspeci
Minimizing symmetric mean absolute percentage error (SMAPE) I don't think there is a closed-form solution to this question. (I'd be interested in being proven wrong.) I'd assume you will need to simulate. And hope that your predictive posterior is not misspecified too badly. In case it is interesting, we wrote a little paper (see also this presentation) once that explained how minimizing percentage errors can lead to forecasting bias, by rolling standard six-sided dice. We also looked at various flavors of MAPE and wMAPE, but let's concentrate on the sMAPE here. Here is a plot where we simulate "sales" by rolling $n=8$ six-sided dice $N=1,000$ times and plot the average sMAPE, together with pointwise quantiles: fcst <- seq(1,6,by=.01) n.sims <- 1000 n.sales <- 10 confidence <- .8 result.smape <- matrix(nrow=n.sims,ncol=length(fcst)) set.seed(2011) for ( jj in 1:n.sims ) { sales <- sample(seq(1,6),size=n.sales,replace=TRUE) for ( ii in 1:length(fcst) ) { result.smape[jj,ii] <- 2*mean(abs(sales-rep(fcst[ii],n.sales))/(sales+rep(fcst[ii],n.sales))) } } (Note that I'm using the alternative sMAPE formula which divides the denominator by 2.) plot(sales,type="o",ylab="",xlab="",pch=21,bg="black",ylim=c(1,6), main=paste("Sales:",n.sales,"throws of a six-sided die")) plot(fcst,fcst,type="n",ylab="sMAPE",xlab="Forecast",ylim=c(0.3,1.1)) polygon(c(fcst,rev(fcst)),c( apply(result.smape,2,quantile,probs=(1-confidence)/2), rev(apply(result.smape,2,quantile,probs=1-(1-confidence)/2))), density=10,angle=45) lines(fcst,apply(result.smape,2,mean)) legend(x="topright",inset=.02,col="black",lwd=1,legend="sMAPE") Something along these lines may help in your case. (Again, you will need to assume that your posterior predictive distribution is "correct enough" to do this kind of simulation - but you would need to assume that for any other approach, too, so this just adds a general caveat, not a specific issue.) In this simple example of rolling standard six-sided dice, we can actually calculate and plot the expected s(M)APE as a function of the forecast: expected.sape <- function ( fcst ) sum(abs(fcst-seq(1,6))/(seq(1,6)+fcst))/3 plot(fcst,mapply(expected.sape,fcst),type="l",xlab="Forecast",ylab="Expected sAPE") This agrees rather well with the simulation averages above. And it shows nicely that the EsAPE-minimal forecast for rolling a standard six-sided die is a biased 4, instead of the unbiased expectation of 3.5. Additional fun fact: if your predictive distribution is a Poisson with a predicted parameter $\hat{\lambda}<1$, then the forecast that minimizes the expected sAPE is $\hat{y}=1$ - independently of the specific value of $\hat{\lambda}$. At least this is claimed in footnote 1 in Seaman & Bowman (in press, IJF, commentary on the M5 forecasting competiton) without a proof. It's quite easy to see that the EsAPE-minimal forecast satisfies $\hat{y}\geq 1$ (you just show that any alternative forecast $\hat{y}'<1$ will lead to a larger EsAPE). Showing that $\hat{y}'>1$ will lead to a larger EsAPE than $\hat{y}=1$ seems to be a little tedious. However, simulations look reassuring.
Minimizing symmetric mean absolute percentage error (SMAPE) I don't think there is a closed-form solution to this question. (I'd be interested in being proven wrong.) I'd assume you will need to simulate. And hope that your predictive posterior is not misspeci
26,681
Torn between PET-PEESE and multilevel approaches to meta-analysis: is there a happy medium?
I have worked on a meta-analysis following mainly the Cheung approach (but not the 3 levels) and recently came across the PET-PEESE approach for correcting publication bias. I was also intrigued in combinations of the two approaches. So far my experience. I think there are two ways to tackle your problem. A simple one and a more complicated one. The quote below seems to suggest that random effects exacerbate the publication bias so to me it seems that if you suspect publication bias to be an issue, you cannot simply use a random effects model. With selection for statistical significance, REE is always more biased than FEE (Table 3). This predictable inferiority is due to the fact that REE is itself a weighted average of the simple mean, which has the largest publication bias, and FEE. I am assuming that publication bias is a serious concern. Simple approach: Model the heterogeneity under PET-PEESE If I understood the questions correctly, I think this approach is the most pragmatic starting point. The PET-PEESE approach lends itself to extensions to meta-analytic regressions. If the source of heterogeneity stems mainly from the different variables in the effect sizes than you can model the heterogeneity as fixed effects by including indicator variables (1/0) for each variable*. In addition, if you suspect that some variables have better measurement properties or are closer related to your construct of interest, you might want to have a look at the Hunter and Schmidt style of meta-analyis. They propose some corrections for measurement error. This approach would probably give you an initial idea of the size of the publication bias via the PET and PEESE intercepts and of the heterogeneity based on the variance in the fixed effects. The more complicated approach: Model heterogeneity and publication bias explicitly I mean that you explicitly model the occurrence of publication bias according to the Stanley and Doucouliagos paper. You also have to explicitly write out the three levels of Cheung as random effects. In other words, this approach requires you to specify the likelihood yourself and would probably be a methodological contribution in itself. I think it is possible to specify such a likelihood (with appropriate priors) following a hierarchical Bayes approach in Stan and use the posterior estimates. The manual has a short section on meta-analysis. The users list is also very helpful. The second approach is probably overkill for what you want at this stage but it would probably be more correct than the first approach. And I would be interested in whether it works. * If you have a lot of variables (and not a lot of effect sizes) than it might be better to group similar variables into groups (yes, that is a judgement call), and use group indicator variables.
Torn between PET-PEESE and multilevel approaches to meta-analysis: is there a happy medium?
I have worked on a meta-analysis following mainly the Cheung approach (but not the 3 levels) and recently came across the PET-PEESE approach for correcting publication bias. I was also intrigued in co
Torn between PET-PEESE and multilevel approaches to meta-analysis: is there a happy medium? I have worked on a meta-analysis following mainly the Cheung approach (but not the 3 levels) and recently came across the PET-PEESE approach for correcting publication bias. I was also intrigued in combinations of the two approaches. So far my experience. I think there are two ways to tackle your problem. A simple one and a more complicated one. The quote below seems to suggest that random effects exacerbate the publication bias so to me it seems that if you suspect publication bias to be an issue, you cannot simply use a random effects model. With selection for statistical significance, REE is always more biased than FEE (Table 3). This predictable inferiority is due to the fact that REE is itself a weighted average of the simple mean, which has the largest publication bias, and FEE. I am assuming that publication bias is a serious concern. Simple approach: Model the heterogeneity under PET-PEESE If I understood the questions correctly, I think this approach is the most pragmatic starting point. The PET-PEESE approach lends itself to extensions to meta-analytic regressions. If the source of heterogeneity stems mainly from the different variables in the effect sizes than you can model the heterogeneity as fixed effects by including indicator variables (1/0) for each variable*. In addition, if you suspect that some variables have better measurement properties or are closer related to your construct of interest, you might want to have a look at the Hunter and Schmidt style of meta-analyis. They propose some corrections for measurement error. This approach would probably give you an initial idea of the size of the publication bias via the PET and PEESE intercepts and of the heterogeneity based on the variance in the fixed effects. The more complicated approach: Model heterogeneity and publication bias explicitly I mean that you explicitly model the occurrence of publication bias according to the Stanley and Doucouliagos paper. You also have to explicitly write out the three levels of Cheung as random effects. In other words, this approach requires you to specify the likelihood yourself and would probably be a methodological contribution in itself. I think it is possible to specify such a likelihood (with appropriate priors) following a hierarchical Bayes approach in Stan and use the posterior estimates. The manual has a short section on meta-analysis. The users list is also very helpful. The second approach is probably overkill for what you want at this stage but it would probably be more correct than the first approach. And I would be interested in whether it works. * If you have a lot of variables (and not a lot of effect sizes) than it might be better to group similar variables into groups (yes, that is a judgement call), and use group indicator variables.
Torn between PET-PEESE and multilevel approaches to meta-analysis: is there a happy medium? I have worked on a meta-analysis following mainly the Cheung approach (but not the 3 levels) and recently came across the PET-PEESE approach for correcting publication bias. I was also intrigued in co
26,682
VIF for generalized linear model
If we look at the function library(car) getS3method("vif", "default") #R function (mod, ...) #R { #R v <- vcov(mod) #R assign <- attr(model.matrix(mod), "assign") #R [...] #R terms <- labels(terms(mod)) #R n.terms <- length(terms) #R [...] #R R <- cov2cor(v) #R detR <- det(R) #R result <- matrix(0, n.terms, 3) #R rownames(result) <- terms #R colnames(result) <- c("GVIF", "Df", "GVIF^(1/(2*Df))") #R for (term in 1:n.terms) { #R subs <- which(assign == term) #R result[term, 1] <- det(as.matrix(R[subs, subs])) * det(as.matrix(R[-subs, #R -subs]))/detR #R result[term, 2] <- length(subs) #R } #R if (all(result[, 2] == 1)) #R result <- result[, 1] #R else result[, 3] <- result[, 1]^(1/(2 * result[, 2])) #R result #R } then it calls vcov which will differ for a glm then lm. In the glm case it depends on the outcome. Thus, you get the different results. All the above is consistent with the 1992 article Fox, J., & Monette, G. (1992). Generalized collinearity diagnostics. Journal of the American Statistical Association, 87(417), 178-183. in the linear model case. See particularly Equation (10) and #R result[term, 1] <- det(as.matrix(R[subs, subs])) * det(as.matrix(R[-subs, #R -subs]))/detR To the question Is the variance inflation factor useful for GLM models Then I gather that the results in the 1992 article may still hold asymptotically. However, some pen and paper is likely need to justify this claim and I am may be wrong.
VIF for generalized linear model
If we look at the function library(car) getS3method("vif", "default") #R function (mod, ...) #R { #R v <- vcov(mod) #R assign <- attr(model.matrix(mod), "assign") #R [...] #R terms <-
VIF for generalized linear model If we look at the function library(car) getS3method("vif", "default") #R function (mod, ...) #R { #R v <- vcov(mod) #R assign <- attr(model.matrix(mod), "assign") #R [...] #R terms <- labels(terms(mod)) #R n.terms <- length(terms) #R [...] #R R <- cov2cor(v) #R detR <- det(R) #R result <- matrix(0, n.terms, 3) #R rownames(result) <- terms #R colnames(result) <- c("GVIF", "Df", "GVIF^(1/(2*Df))") #R for (term in 1:n.terms) { #R subs <- which(assign == term) #R result[term, 1] <- det(as.matrix(R[subs, subs])) * det(as.matrix(R[-subs, #R -subs]))/detR #R result[term, 2] <- length(subs) #R } #R if (all(result[, 2] == 1)) #R result <- result[, 1] #R else result[, 3] <- result[, 1]^(1/(2 * result[, 2])) #R result #R } then it calls vcov which will differ for a glm then lm. In the glm case it depends on the outcome. Thus, you get the different results. All the above is consistent with the 1992 article Fox, J., & Monette, G. (1992). Generalized collinearity diagnostics. Journal of the American Statistical Association, 87(417), 178-183. in the linear model case. See particularly Equation (10) and #R result[term, 1] <- det(as.matrix(R[subs, subs])) * det(as.matrix(R[-subs, #R -subs]))/detR To the question Is the variance inflation factor useful for GLM models Then I gather that the results in the 1992 article may still hold asymptotically. However, some pen and paper is likely need to justify this claim and I am may be wrong.
VIF for generalized linear model If we look at the function library(car) getS3method("vif", "default") #R function (mod, ...) #R { #R v <- vcov(mod) #R assign <- attr(model.matrix(mod), "assign") #R [...] #R terms <-
26,683
The best way for clustering an adjacency matrix
I have done some work in the past on spectral clustering which might be of use here. The basic idea is that one can use the adjacency matrix to form the so called Laplacian matrix: $L = I-D^{-1/2}AD^{-1/2}$ You can check for yourself that the lowest eigenvalue of the Laplacian is zero. The first nonzero eigenvalue is often called the algebraic connectivity, and the corresponding eignevector will have positive part and negative part corresponding to two partitions $(B_1,B_2)$ of the underlying graph. Roughly speaking, the magnitude of the first nonzero eignevalue is a measure of the strength of the connections between the two partitions. Perhaps you could employ this approach recursively or consider the first few nonzero eignvalues of the Laplacian.The following Wikipedia article about spectral clustering is a good start.
The best way for clustering an adjacency matrix
I have done some work in the past on spectral clustering which might be of use here. The basic idea is that one can use the adjacency matrix to form the so called Laplacian matrix: $L = I-D^{-1/2}AD^{
The best way for clustering an adjacency matrix I have done some work in the past on spectral clustering which might be of use here. The basic idea is that one can use the adjacency matrix to form the so called Laplacian matrix: $L = I-D^{-1/2}AD^{-1/2}$ You can check for yourself that the lowest eigenvalue of the Laplacian is zero. The first nonzero eigenvalue is often called the algebraic connectivity, and the corresponding eignevector will have positive part and negative part corresponding to two partitions $(B_1,B_2)$ of the underlying graph. Roughly speaking, the magnitude of the first nonzero eignevalue is a measure of the strength of the connections between the two partitions. Perhaps you could employ this approach recursively or consider the first few nonzero eignvalues of the Laplacian.The following Wikipedia article about spectral clustering is a good start.
The best way for clustering an adjacency matrix I have done some work in the past on spectral clustering which might be of use here. The basic idea is that one can use the adjacency matrix to form the so called Laplacian matrix: $L = I-D^{-1/2}AD^{
26,684
The best way for clustering an adjacency matrix
I am looking at the same problem at the moment. From quick review, it seems like Spectral clustering is the most "natural" way to analyze an Adjacency matrix. See this blog post for more details.
The best way for clustering an adjacency matrix
I am looking at the same problem at the moment. From quick review, it seems like Spectral clustering is the most "natural" way to analyze an Adjacency matrix. See this blog post for more details.
The best way for clustering an adjacency matrix I am looking at the same problem at the moment. From quick review, it seems like Spectral clustering is the most "natural" way to analyze an Adjacency matrix. See this blog post for more details.
The best way for clustering an adjacency matrix I am looking at the same problem at the moment. From quick review, it seems like Spectral clustering is the most "natural" way to analyze an Adjacency matrix. See this blog post for more details.
26,685
The best way for clustering an adjacency matrix
Alternatively... Neural data (real or artificial) is often a highly compressed representation of data, which means the data is very random, which means you won't find any correlations. Which you have!! Congratulations! :)
The best way for clustering an adjacency matrix
Alternatively... Neural data (real or artificial) is often a highly compressed representation of data, which means the data is very random, which means you won't find any correlations. Which you have
The best way for clustering an adjacency matrix Alternatively... Neural data (real or artificial) is often a highly compressed representation of data, which means the data is very random, which means you won't find any correlations. Which you have!! Congratulations! :)
The best way for clustering an adjacency matrix Alternatively... Neural data (real or artificial) is often a highly compressed representation of data, which means the data is very random, which means you won't find any correlations. Which you have
26,686
Moment Generating Functions and Fourier Transforms?
The MGF is $M_{X}(t)=E\left[ e^{tX} \right]$ for real values of $t$ where the expectation exists. In terms of a probability density function $f(x)$, $M_{X}(t)=\int_{-\infty}^{\infty} e^{tx}f(x) dx.$ This is not a Fourier transform (which would have $e^{itx}$ rather than $e^{tx}$. The moment generating function is almost a two-sided Laplace transform, but the two-sided Laplace transform has $e^{-tx}$ rather than $e^{tx}$.
Moment Generating Functions and Fourier Transforms?
The MGF is $M_{X}(t)=E\left[ e^{tX} \right]$ for real values of $t$ where the expectation exists. In terms of a probability density function $f(x)$, $M_{X}(t)=\int_{-\infty}^{\infty} e^{tx}f(x) dx.
Moment Generating Functions and Fourier Transforms? The MGF is $M_{X}(t)=E\left[ e^{tX} \right]$ for real values of $t$ where the expectation exists. In terms of a probability density function $f(x)$, $M_{X}(t)=\int_{-\infty}^{\infty} e^{tx}f(x) dx.$ This is not a Fourier transform (which would have $e^{itx}$ rather than $e^{tx}$. The moment generating function is almost a two-sided Laplace transform, but the two-sided Laplace transform has $e^{-tx}$ rather than $e^{tx}$.
Moment Generating Functions and Fourier Transforms? The MGF is $M_{X}(t)=E\left[ e^{tX} \right]$ for real values of $t$ where the expectation exists. In terms of a probability density function $f(x)$, $M_{X}(t)=\int_{-\infty}^{\infty} e^{tx}f(x) dx.
26,687
Required number of simulations for Monte Carlo analysis
I usually conduct the convergence study, and determine the number of simulations required, then use this number in subsequent simulations. I also throw a warning if the error is larger than suggested by the chosen number. The typical way to determine the required number of simulations is by computing the variance of the simulation $\hat\sigma_N^2$ for N paths, then the standard error is $\frac{\hat\sigma_N}{\sqrt{N}}$, see section on error estimation of MC in "Monte Carlo Methods in Finance" by Peter Jackel, also a chapter "Evaluating a definite integral" in Sobol's little book Alternatively, you could compute the error for each simulation, and stop when it goes beyond certain threshold or max number of paths is reached, where this number was again determined by the convergence study.
Required number of simulations for Monte Carlo analysis
I usually conduct the convergence study, and determine the number of simulations required, then use this number in subsequent simulations. I also throw a warning if the error is larger than suggested
Required number of simulations for Monte Carlo analysis I usually conduct the convergence study, and determine the number of simulations required, then use this number in subsequent simulations. I also throw a warning if the error is larger than suggested by the chosen number. The typical way to determine the required number of simulations is by computing the variance of the simulation $\hat\sigma_N^2$ for N paths, then the standard error is $\frac{\hat\sigma_N}{\sqrt{N}}$, see section on error estimation of MC in "Monte Carlo Methods in Finance" by Peter Jackel, also a chapter "Evaluating a definite integral" in Sobol's little book Alternatively, you could compute the error for each simulation, and stop when it goes beyond certain threshold or max number of paths is reached, where this number was again determined by the convergence study.
Required number of simulations for Monte Carlo analysis I usually conduct the convergence study, and determine the number of simulations required, then use this number in subsequent simulations. I also throw a warning if the error is larger than suggested
26,688
Validate web a/b tests by re-running an experiment - is this valid?
Ignoring the probabilities of a false positive for the moment, I would look at it like this: If you run the experiment twice an get the same result, you have no idea whether there was two true positive results or two false positive results in a row. If you run the experiment twice and get two different results, then you do not know which is the true positive and which was the false positive result. In either case you should then run a third experiment, just to be certain. This maybe fine for experiments that are relatively inexpensive, but where the cost is potentially high (like losing customers) you really need to consider the benefit. Looking at the probabilities, the first time you run the experiment, there is a 1/20 chance of a false positive. The second time you run the experiment there is still a 1/20 chance of a false positive (think of it as rolling a die where each roll has a 1/6 chance of obtaining a certain number). There is only a 1/400 chance of having two false positives in a row. The real issue is to have a well defined hypothesis with stringent procedures, and to have a sample size, level of error, and confidence interval you can live with or afford. Repetition of the experiment should be left to exploring customers over time changes made by the organisation changes made by the competition rather than second guessing results. Although explaining this to managers is easier said than done.
Validate web a/b tests by re-running an experiment - is this valid?
Ignoring the probabilities of a false positive for the moment, I would look at it like this: If you run the experiment twice an get the same result, you have no idea whether there was two true positi
Validate web a/b tests by re-running an experiment - is this valid? Ignoring the probabilities of a false positive for the moment, I would look at it like this: If you run the experiment twice an get the same result, you have no idea whether there was two true positive results or two false positive results in a row. If you run the experiment twice and get two different results, then you do not know which is the true positive and which was the false positive result. In either case you should then run a third experiment, just to be certain. This maybe fine for experiments that are relatively inexpensive, but where the cost is potentially high (like losing customers) you really need to consider the benefit. Looking at the probabilities, the first time you run the experiment, there is a 1/20 chance of a false positive. The second time you run the experiment there is still a 1/20 chance of a false positive (think of it as rolling a die where each roll has a 1/6 chance of obtaining a certain number). There is only a 1/400 chance of having two false positives in a row. The real issue is to have a well defined hypothesis with stringent procedures, and to have a sample size, level of error, and confidence interval you can live with or afford. Repetition of the experiment should be left to exploring customers over time changes made by the organisation changes made by the competition rather than second guessing results. Although explaining this to managers is easier said than done.
Validate web a/b tests by re-running an experiment - is this valid? Ignoring the probabilities of a false positive for the moment, I would look at it like this: If you run the experiment twice an get the same result, you have no idea whether there was two true positi
26,689
Validate web a/b tests by re-running an experiment - is this valid?
Yeah that statement is correct, assuming your experiment is ideal. But getting an ideal experiment is way way harder than this sentiment gives credence. "Real world" data is messy, complicated, and hard to interpret in the first place. There's tremendous room for flawed analysis, hidden variables (there's very rarely "the same constraints"), or miscommunications between a data scientist doing their job and a marking exec doing theirs. From a business standpoint ensure good methodology and not being overconfident in results; a trickier challenge than you might think. Once you get those down, then work on that 5%.
Validate web a/b tests by re-running an experiment - is this valid?
Yeah that statement is correct, assuming your experiment is ideal. But getting an ideal experiment is way way harder than this sentiment gives credence. "Real world" data is messy, complicated, and ha
Validate web a/b tests by re-running an experiment - is this valid? Yeah that statement is correct, assuming your experiment is ideal. But getting an ideal experiment is way way harder than this sentiment gives credence. "Real world" data is messy, complicated, and hard to interpret in the first place. There's tremendous room for flawed analysis, hidden variables (there's very rarely "the same constraints"), or miscommunications between a data scientist doing their job and a marking exec doing theirs. From a business standpoint ensure good methodology and not being overconfident in results; a trickier challenge than you might think. Once you get those down, then work on that 5%.
Validate web a/b tests by re-running an experiment - is this valid? Yeah that statement is correct, assuming your experiment is ideal. But getting an ideal experiment is way way harder than this sentiment gives credence. "Real world" data is messy, complicated, and ha
26,690
Negative values in predictions for an always-positive response variable in linear regression
I assume that you are using the OLS estimator on this linear regression model. You can use the inequality constrained least-squares estimator, which will be the solution to a minimization problem under inequality constraints. Using standard matrix notation (vectors are column vectors) the minimization problem is stated as $$\min_{\beta} (\mathbf y-\mathbf X\beta)'(\mathbf y-\mathbf X\beta) \\s.t.-\mathbf Z\beta \le \mathbf 0 $$ ...where $\mathbf y$ is $n \times 1$ , $\mathbf X$ is $n\times k$, $\beta$ is $k\times 1$ and $\mathbf Z$ is the $m \times k$ matrix containing the out-of-sample regressor series of length $m$ that are used for prediction. We have $m$ linear inequality constraints (and the objective function is convex, so the first order conditions are sufficient for a minimum). The Lagrangean of this problem is $$L = (\mathbf y-\mathbf X\beta)'(\mathbf y-\mathbf X\beta) -\lambda'\mathbf Z\beta = \mathbf y'\mathbf y-\mathbf y'\mathbf X\beta - \beta'\mathbf X'\mathbf y+ \beta'\mathbf X'\mathbf X\beta-\lambda'\mathbf Z\beta$$ $$= \mathbf y'\mathbf y - 2\beta'\mathbf X'\mathbf y+ \beta'\mathbf X'\mathbf X\beta-\lambda'\mathbf Z\beta $$ where $\lambda$ is a $m \times 1$ column vector of non-negative Karush -Kuhn -Tucker multipliers. The first order conditions are (you may want to review rules for matrix and vector differentiation) $$\frac {\partial L}{\partial \beta}= \mathbb 0\Rightarrow - 2\mathbf X'\mathbf y +2\mathbf X'\mathbf X\beta - \mathbf Z'\lambda $$ $$\Rightarrow \hat \beta_R = \left(\mathbf X'\mathbf X\right)^{-1}\mathbf X'\mathbf y + \frac 12\left(\mathbf X'\mathbf X\right)^{-1}\mathbf Z'\lambda = \hat \beta_{OLS}+ \left(\mathbf X'\mathbf X\right)^{-1}\mathbf Z'\xi \qquad [1]$$ ...where $\xi = \frac 12 \lambda$, for convenience, and $\hat \beta_{OLS}$ is the estimator we would obtain from ordinary least squares estimation. The method is fully elaborated in Liew (1976).
Negative values in predictions for an always-positive response variable in linear regression
I assume that you are using the OLS estimator on this linear regression model. You can use the inequality constrained least-squares estimator, which will be the solution to a minimization problem unde
Negative values in predictions for an always-positive response variable in linear regression I assume that you are using the OLS estimator on this linear regression model. You can use the inequality constrained least-squares estimator, which will be the solution to a minimization problem under inequality constraints. Using standard matrix notation (vectors are column vectors) the minimization problem is stated as $$\min_{\beta} (\mathbf y-\mathbf X\beta)'(\mathbf y-\mathbf X\beta) \\s.t.-\mathbf Z\beta \le \mathbf 0 $$ ...where $\mathbf y$ is $n \times 1$ , $\mathbf X$ is $n\times k$, $\beta$ is $k\times 1$ and $\mathbf Z$ is the $m \times k$ matrix containing the out-of-sample regressor series of length $m$ that are used for prediction. We have $m$ linear inequality constraints (and the objective function is convex, so the first order conditions are sufficient for a minimum). The Lagrangean of this problem is $$L = (\mathbf y-\mathbf X\beta)'(\mathbf y-\mathbf X\beta) -\lambda'\mathbf Z\beta = \mathbf y'\mathbf y-\mathbf y'\mathbf X\beta - \beta'\mathbf X'\mathbf y+ \beta'\mathbf X'\mathbf X\beta-\lambda'\mathbf Z\beta$$ $$= \mathbf y'\mathbf y - 2\beta'\mathbf X'\mathbf y+ \beta'\mathbf X'\mathbf X\beta-\lambda'\mathbf Z\beta $$ where $\lambda$ is a $m \times 1$ column vector of non-negative Karush -Kuhn -Tucker multipliers. The first order conditions are (you may want to review rules for matrix and vector differentiation) $$\frac {\partial L}{\partial \beta}= \mathbb 0\Rightarrow - 2\mathbf X'\mathbf y +2\mathbf X'\mathbf X\beta - \mathbf Z'\lambda $$ $$\Rightarrow \hat \beta_R = \left(\mathbf X'\mathbf X\right)^{-1}\mathbf X'\mathbf y + \frac 12\left(\mathbf X'\mathbf X\right)^{-1}\mathbf Z'\lambda = \hat \beta_{OLS}+ \left(\mathbf X'\mathbf X\right)^{-1}\mathbf Z'\xi \qquad [1]$$ ...where $\xi = \frac 12 \lambda$, for convenience, and $\hat \beta_{OLS}$ is the estimator we would obtain from ordinary least squares estimation. The method is fully elaborated in Liew (1976).
Negative values in predictions for an always-positive response variable in linear regression I assume that you are using the OLS estimator on this linear regression model. You can use the inequality constrained least-squares estimator, which will be the solution to a minimization problem unde
26,691
Generalised least squares: from regression coefficients to correlation coefficients?
The answer is yes, the linear regression coefficients are the correlations of the predictors with the response, but only if you use the correct coordinate system. To see what I mean, recall that if $x_1, x_2, \ldots, x_n$ and $y$ are centered and standardized, then the correlation between each $x_i$ and $y$ is just the dot product $x_i^t y$. Also, the least squares solution to linear regression is $$ \beta = (X^t X)^{-1} X^t y $$ If it so happens that $X^{t} X = I$ (the identity matrix) then $$ \beta = X^t y $$ and we recover the correlation vector. It is often attractive to recast a regression problem in terms of predictors $\tilde{x}_i$ that satisfy $\tilde{X}^t \tilde{X} = I$ by finding appropriate linear combinations of the original predictors that make this relation true (or equivalently, a linear change of coordinates); these new predictors are called the principal components. So overall, the answer to your question is yes, but only when the predictors are themselves uncorrelated. Otherwise, the expression $$X^t X \beta = X^t y$$ shows that the betas must be mixed together with the correllations between the predictors themselves to recover the predictor-response correlations. As a side note, this also explains why the result is always true for one variable linear regression. Once the predictor vector $x$ is standardized, then: $$ x_0^t x = \sum_i x_{i} = 0 $$ where $x_0$ is the intercept vector of all ones. So the (two column) data matrix $X$ automatically satisfies $X^t X = I$, and the result follows.
Generalised least squares: from regression coefficients to correlation coefficients?
The answer is yes, the linear regression coefficients are the correlations of the predictors with the response, but only if you use the correct coordinate system. To see what I mean, recall that if $x
Generalised least squares: from regression coefficients to correlation coefficients? The answer is yes, the linear regression coefficients are the correlations of the predictors with the response, but only if you use the correct coordinate system. To see what I mean, recall that if $x_1, x_2, \ldots, x_n$ and $y$ are centered and standardized, then the correlation between each $x_i$ and $y$ is just the dot product $x_i^t y$. Also, the least squares solution to linear regression is $$ \beta = (X^t X)^{-1} X^t y $$ If it so happens that $X^{t} X = I$ (the identity matrix) then $$ \beta = X^t y $$ and we recover the correlation vector. It is often attractive to recast a regression problem in terms of predictors $\tilde{x}_i$ that satisfy $\tilde{X}^t \tilde{X} = I$ by finding appropriate linear combinations of the original predictors that make this relation true (or equivalently, a linear change of coordinates); these new predictors are called the principal components. So overall, the answer to your question is yes, but only when the predictors are themselves uncorrelated. Otherwise, the expression $$X^t X \beta = X^t y$$ shows that the betas must be mixed together with the correllations between the predictors themselves to recover the predictor-response correlations. As a side note, this also explains why the result is always true for one variable linear regression. Once the predictor vector $x$ is standardized, then: $$ x_0^t x = \sum_i x_{i} = 0 $$ where $x_0$ is the intercept vector of all ones. So the (two column) data matrix $X$ automatically satisfies $X^t X = I$, and the result follows.
Generalised least squares: from regression coefficients to correlation coefficients? The answer is yes, the linear regression coefficients are the correlations of the predictors with the response, but only if you use the correct coordinate system. To see what I mean, recall that if $x
26,692
Bayesian inference and degrees of freedom
At least from a theoretical point of view, identifiably is not important from a Bayesian perspective. If the data is not informative about some parameters under the model then the posterior of those parameters will just be highly influenced by the prior. From a practical point of view if the posterior is broad then approximate methods such as MCMC will take longer maybe much longer to run. Another practical problem is that if you have a large parameter space and little data as it sounds like you do then the results, if you can manage to compute them, are likely to be very sensitive to prior specification.
Bayesian inference and degrees of freedom
At least from a theoretical point of view, identifiably is not important from a Bayesian perspective. If the data is not informative about some parameters under the model then the posterior of those
Bayesian inference and degrees of freedom At least from a theoretical point of view, identifiably is not important from a Bayesian perspective. If the data is not informative about some parameters under the model then the posterior of those parameters will just be highly influenced by the prior. From a practical point of view if the posterior is broad then approximate methods such as MCMC will take longer maybe much longer to run. Another practical problem is that if you have a large parameter space and little data as it sounds like you do then the results, if you can manage to compute them, are likely to be very sensitive to prior specification.
Bayesian inference and degrees of freedom At least from a theoretical point of view, identifiably is not important from a Bayesian perspective. If the data is not informative about some parameters under the model then the posterior of those
26,693
Bayesian inference and degrees of freedom
There is literature on Bayesian inference on over-identified models (e.g. Gelfand and Sahu, 1999. J. Amer. Statist. Assoc. 94:247-253), that is when the number of estimands in a model exceeds the number of (independent) observations. If priors are proper, the posterior is proper as well, but Bayesian learning on non-identified parameters depends on how much is learned about items that are identified. Hence, priors are influential, and this may be a serious matter with Bayesian models fitted, say, to DNA data, where the number of unknowns is in the dozens of millions. Caution should be exercised, e.g., in medical genetics. There is a concept called "effective number of parameters" or neff (see, for example, in the Deviance Information Criterion, or in regression models with shrinkage). In all cases, the neff is at most n. As in the good all times: the number of independent questions that one can ask from a data set is, at most, n. Hence if you pose n+k questions, k of the answers will b redundant with respect the first n answers. In short, statistical learning must be imperfect in overidentified models no matter how fancy you are in "regularizing" the model or how eloquent your local Bayesian resident expert is. Daniel Gianola
Bayesian inference and degrees of freedom
There is literature on Bayesian inference on over-identified models (e.g. Gelfand and Sahu, 1999. J. Amer. Statist. Assoc. 94:247-253), that is when the number of estimands in a model exceeds the numb
Bayesian inference and degrees of freedom There is literature on Bayesian inference on over-identified models (e.g. Gelfand and Sahu, 1999. J. Amer. Statist. Assoc. 94:247-253), that is when the number of estimands in a model exceeds the number of (independent) observations. If priors are proper, the posterior is proper as well, but Bayesian learning on non-identified parameters depends on how much is learned about items that are identified. Hence, priors are influential, and this may be a serious matter with Bayesian models fitted, say, to DNA data, where the number of unknowns is in the dozens of millions. Caution should be exercised, e.g., in medical genetics. There is a concept called "effective number of parameters" or neff (see, for example, in the Deviance Information Criterion, or in regression models with shrinkage). In all cases, the neff is at most n. As in the good all times: the number of independent questions that one can ask from a data set is, at most, n. Hence if you pose n+k questions, k of the answers will b redundant with respect the first n answers. In short, statistical learning must be imperfect in overidentified models no matter how fancy you are in "regularizing" the model or how eloquent your local Bayesian resident expert is. Daniel Gianola
Bayesian inference and degrees of freedom There is literature on Bayesian inference on over-identified models (e.g. Gelfand and Sahu, 1999. J. Amer. Statist. Assoc. 94:247-253), that is when the number of estimands in a model exceeds the numb
26,694
Maximum number of classes for RandomForest multiclass estimation
I have at least one experience doing so. For the NHTS 2017 dataset, I have modeled a number of variables. Notably, random forests perform quite well on predicting vehicle ownership per household (using most of the other household-level variables as features), somewhat outperforming logit models (which are, for whatever reason, state-of-the-art in travel modeling). There are a dozen classes here. On the other hand, modeling individuals' work schedules (jointly hour leaving to go to work and hour leaving from work) has a large quantity of combinations. After some data preprocessing, there are over 200 classes. Random forest models perform abysmally here, in terms of accuracy. I get about 20% accuracy for an RF model with optimized max depth, and almost 60% accuracy for a logistic regression. Interestingly, the log loss of the RF model is still lower than that of the logistic model. These results ended up as an extended abstract at TRB. You can read the paper unpaywalled here
Maximum number of classes for RandomForest multiclass estimation
I have at least one experience doing so. For the NHTS 2017 dataset, I have modeled a number of variables. Notably, random forests perform quite well on predicting vehicle ownership per household (us
Maximum number of classes for RandomForest multiclass estimation I have at least one experience doing so. For the NHTS 2017 dataset, I have modeled a number of variables. Notably, random forests perform quite well on predicting vehicle ownership per household (using most of the other household-level variables as features), somewhat outperforming logit models (which are, for whatever reason, state-of-the-art in travel modeling). There are a dozen classes here. On the other hand, modeling individuals' work schedules (jointly hour leaving to go to work and hour leaving from work) has a large quantity of combinations. After some data preprocessing, there are over 200 classes. Random forest models perform abysmally here, in terms of accuracy. I get about 20% accuracy for an RF model with optimized max depth, and almost 60% accuracy for a logistic regression. Interestingly, the log loss of the RF model is still lower than that of the logistic model. These results ended up as an extended abstract at TRB. You can read the paper unpaywalled here
Maximum number of classes for RandomForest multiclass estimation I have at least one experience doing so. For the NHTS 2017 dataset, I have modeled a number of variables. Notably, random forests perform quite well on predicting vehicle ownership per household (us
26,695
Voting system that uses accuracy of each voter and the associated uncertainty
You should consider the expertise of a voter as a latent variable of your system. You may then be able to solve your problem with bayesian inference. A representation as graphical model could be like this : Let's denote the variables $A$ for the true answer, $V_i$ for the vote of the voter $i$ and $H_i$ for its history. Say that you also have an "expertise" parameter $\mu_i$ such that $\Pr(A=V_i) = \mu_i$. If you put some prior on these $\mu_i$ -for example a Beta prior- you should be able to use the Bayes theorem to infer $\Pr(\mu_i \mid H_i)$, and then integrate over $\mu_i$ to compute $$\Pr(A \mid V_i, H_i) = \int_{\mu_i} \Pr(A, \mu_i \mid A_i, H_i)~ \mathrm{d}\mu_i$$ These systems are difficult to solve. You can use the EM algorithm as an approximation, or use complete likelihood maximisation scheme to perform exact Bayesian inference. Take a look on this paper Variational Inference for Crowdsourcing, Liu, Peng and Ihler 2012 (presented yesterday at NIPS !) for detailed algorithms for solving this task.
Voting system that uses accuracy of each voter and the associated uncertainty
You should consider the expertise of a voter as a latent variable of your system. You may then be able to solve your problem with bayesian inference. A representation as graphical model could be like
Voting system that uses accuracy of each voter and the associated uncertainty You should consider the expertise of a voter as a latent variable of your system. You may then be able to solve your problem with bayesian inference. A representation as graphical model could be like this : Let's denote the variables $A$ for the true answer, $V_i$ for the vote of the voter $i$ and $H_i$ for its history. Say that you also have an "expertise" parameter $\mu_i$ such that $\Pr(A=V_i) = \mu_i$. If you put some prior on these $\mu_i$ -for example a Beta prior- you should be able to use the Bayes theorem to infer $\Pr(\mu_i \mid H_i)$, and then integrate over $\mu_i$ to compute $$\Pr(A \mid V_i, H_i) = \int_{\mu_i} \Pr(A, \mu_i \mid A_i, H_i)~ \mathrm{d}\mu_i$$ These systems are difficult to solve. You can use the EM algorithm as an approximation, or use complete likelihood maximisation scheme to perform exact Bayesian inference. Take a look on this paper Variational Inference for Crowdsourcing, Liu, Peng and Ihler 2012 (presented yesterday at NIPS !) for detailed algorithms for solving this task.
Voting system that uses accuracy of each voter and the associated uncertainty You should consider the expertise of a voter as a latent variable of your system. You may then be able to solve your problem with bayesian inference. A representation as graphical model could be like
26,696
Voting system that uses accuracy of each voter and the associated uncertainty
I know this is really old now but I just stumbled across this question while searching and I think another way to think about how to solve it is using the framework of online learning with expert advice. In this setting, a learner receives predictions (votes) from a set of "experts" (voters), and must choose what to predict itself based on this advice. After the learner makes a prediction, the true outcome is revealed, and the learner adjusts its weightings for how much to attend to each expert's advice accordingly, to minimise long-term regret (the loss incurred by a wrong decision minus the loss incurred by the best expert). Suitable references are "Tracking the Best Expert" (Warmuth & Herbster, 1998): https://users.soe.ucsc.edu/~manfred/pubs/J39.pdf and "Tracking a small set of experts by mixing past posteriors" (Bousquet & Warmuth, 2002): https://jmlr.csail.mit.edu/papers/volume3/bousquet02b/bousquet02bbw.pdf . These algorithms come with proven "regret bounds", but their actual performance will vary depending on how the population of experts changes over time.
Voting system that uses accuracy of each voter and the associated uncertainty
I know this is really old now but I just stumbled across this question while searching and I think another way to think about how to solve it is using the framework of online learning with expert advi
Voting system that uses accuracy of each voter and the associated uncertainty I know this is really old now but I just stumbled across this question while searching and I think another way to think about how to solve it is using the framework of online learning with expert advice. In this setting, a learner receives predictions (votes) from a set of "experts" (voters), and must choose what to predict itself based on this advice. After the learner makes a prediction, the true outcome is revealed, and the learner adjusts its weightings for how much to attend to each expert's advice accordingly, to minimise long-term regret (the loss incurred by a wrong decision minus the loss incurred by the best expert). Suitable references are "Tracking the Best Expert" (Warmuth & Herbster, 1998): https://users.soe.ucsc.edu/~manfred/pubs/J39.pdf and "Tracking a small set of experts by mixing past posteriors" (Bousquet & Warmuth, 2002): https://jmlr.csail.mit.edu/papers/volume3/bousquet02b/bousquet02bbw.pdf . These algorithms come with proven "regret bounds", but their actual performance will vary depending on how the population of experts changes over time.
Voting system that uses accuracy of each voter and the associated uncertainty I know this is really old now but I just stumbled across this question while searching and I think another way to think about how to solve it is using the framework of online learning with expert advi
26,697
Why are distributions important?
Using an assumed distribution (ie. parametric analysis) will reduce the computational cost of your method. I am assuming that you would like to perform a regression or classification task. This means that at some point you are going to estimate the distribution of some data. Nonparametric methods are useful when the data does not conform to a well studied distribution, but they typically take either more time to compute or more memory to store. Also if the data are generated by a process that conforms to a distribution, such as they are an average of some uniformly random processes, then using that distribution makes more sense. In the case of averaging a set of uniform variable the correct distribution is probably the Gaussian Distribution.
Why are distributions important?
Using an assumed distribution (ie. parametric analysis) will reduce the computational cost of your method. I am assuming that you would like to perform a regression or classification task. This means
Why are distributions important? Using an assumed distribution (ie. parametric analysis) will reduce the computational cost of your method. I am assuming that you would like to perform a regression or classification task. This means that at some point you are going to estimate the distribution of some data. Nonparametric methods are useful when the data does not conform to a well studied distribution, but they typically take either more time to compute or more memory to store. Also if the data are generated by a process that conforms to a distribution, such as they are an average of some uniformly random processes, then using that distribution makes more sense. In the case of averaging a set of uniform variable the correct distribution is probably the Gaussian Distribution.
Why are distributions important? Using an assumed distribution (ie. parametric analysis) will reduce the computational cost of your method. I am assuming that you would like to perform a regression or classification task. This means
26,698
Why are distributions important?
Complementing James answer: parametric models also (usually) require less samples in order to have a good fit: this may increase their generalization power: that is, they may predicted new data better, even being wrong. Of course, this depends in the situation, the models and the sample sizes.
Why are distributions important?
Complementing James answer: parametric models also (usually) require less samples in order to have a good fit: this may increase their generalization power: that is, they may predicted new data better
Why are distributions important? Complementing James answer: parametric models also (usually) require less samples in order to have a good fit: this may increase their generalization power: that is, they may predicted new data better, even being wrong. Of course, this depends in the situation, the models and the sample sizes.
Why are distributions important? Complementing James answer: parametric models also (usually) require less samples in order to have a good fit: this may increase their generalization power: that is, they may predicted new data better
26,699
Parallel straight lines on residual vs fitted plot
One possible model it one of a "rounded" or "censored" variable : let $y_1,\ldots y_{10}$ being your 10 observed values. One could suppose that there is a latent variable $Z$ representing the "real" price, which you do not fully know. However, you can write $Y_i=y_j\Rightarrow{}y_{j-1}\leq{}Z_i\leq{}y_{j+1}$ (with $y_0=-\infty, y_{11}=+\infty$, if you forgive this abuse of notation). If you are willing to risk a statement about the distribution of Z in each of these intervals, a Bayesian regression becomes trivial ; a maximum likelihood estimation needs a bit more work (but not much, as far as I can tell). Analogues of this problem are treated by Gelman & Hill (2007).
Parallel straight lines on residual vs fitted plot
One possible model it one of a "rounded" or "censored" variable : let $y_1,\ldots y_{10}$ being your 10 observed values. One could suppose that there is a latent variable $Z$ representing the "real" p
Parallel straight lines on residual vs fitted plot One possible model it one of a "rounded" or "censored" variable : let $y_1,\ldots y_{10}$ being your 10 observed values. One could suppose that there is a latent variable $Z$ representing the "real" price, which you do not fully know. However, you can write $Y_i=y_j\Rightarrow{}y_{j-1}\leq{}Z_i\leq{}y_{j+1}$ (with $y_0=-\infty, y_{11}=+\infty$, if you forgive this abuse of notation). If you are willing to risk a statement about the distribution of Z in each of these intervals, a Bayesian regression becomes trivial ; a maximum likelihood estimation needs a bit more work (but not much, as far as I can tell). Analogues of this problem are treated by Gelman & Hill (2007).
Parallel straight lines on residual vs fitted plot One possible model it one of a "rounded" or "censored" variable : let $y_1,\ldots y_{10}$ being your 10 observed values. One could suppose that there is a latent variable $Z$ representing the "real" p
26,700
Hyperprior distributions for the parameters (scale matrix and degrees of freedom) of a wishart prior to an inverse covariance matrix
R's DPpackage allows a hierarchy that goes as far as you are suggesting on the scale matrix in the function DPdensity. You can peek at what they do in their manual or in the associated vignette to get some ideas. Let $\Sigma$ be the covariance matrix. It sets $\Sigma \sim IW(\nu_1, \Psi_1)$ and $\Psi_1 \sim IW(\nu_2, \Psi_2)$ where $IW(\nu, \Psi)$ is inverse-Wishart with degrees of freedom $\nu$ and mean $\frac{\Psi^{-1}}{\nu - p - 1}$ where $p$ is the dimension of the data. This looked a little backwards to me at first, but if you play with the density you can see it is conjugate. The Wishart density doesn't look promising for putting anything analytical on $\nu$. You could always put just about anything on $\nu$ and use a Metropolis-Hastings step. EDIT: I just noticed you are using jags. There's a good chance I think that it will puke if you try to put any prior on $\Psi_1$, even though inverse-Wishart is conjugate. BUGS implementations can be fickle about what they allow for their multivariate distributions, so it might not know how to do the conjugate update. I don't know for sure though.
Hyperprior distributions for the parameters (scale matrix and degrees of freedom) of a wishart prior
R's DPpackage allows a hierarchy that goes as far as you are suggesting on the scale matrix in the function DPdensity. You can peek at what they do in their manual or in the associated vignette to get
Hyperprior distributions for the parameters (scale matrix and degrees of freedom) of a wishart prior to an inverse covariance matrix R's DPpackage allows a hierarchy that goes as far as you are suggesting on the scale matrix in the function DPdensity. You can peek at what they do in their manual or in the associated vignette to get some ideas. Let $\Sigma$ be the covariance matrix. It sets $\Sigma \sim IW(\nu_1, \Psi_1)$ and $\Psi_1 \sim IW(\nu_2, \Psi_2)$ where $IW(\nu, \Psi)$ is inverse-Wishart with degrees of freedom $\nu$ and mean $\frac{\Psi^{-1}}{\nu - p - 1}$ where $p$ is the dimension of the data. This looked a little backwards to me at first, but if you play with the density you can see it is conjugate. The Wishart density doesn't look promising for putting anything analytical on $\nu$. You could always put just about anything on $\nu$ and use a Metropolis-Hastings step. EDIT: I just noticed you are using jags. There's a good chance I think that it will puke if you try to put any prior on $\Psi_1$, even though inverse-Wishart is conjugate. BUGS implementations can be fickle about what they allow for their multivariate distributions, so it might not know how to do the conjugate update. I don't know for sure though.
Hyperprior distributions for the parameters (scale matrix and degrees of freedom) of a wishart prior R's DPpackage allows a hierarchy that goes as far as you are suggesting on the scale matrix in the function DPdensity. You can peek at what they do in their manual or in the associated vignette to get