37,701 | Cramer-Rao bound for $\chi^2$ distribution parameter estimates

For small $\lambda$, the lower bound for $\theta$ is $\tfrac{1}{4}\nu^2 \lambda^{-3} \le \theta \lambda^{-1} \sigma^{-2}$. Simplifying, $\theta$ varies with $\sigma^2 / \lambda^2$, so $\theta\lambda^{-1}$ varies with $\sigma^2/\lambda$. The inverse of $\theta \lambda^{-1} - 1$ will therefore still increase as $\lambda$ decreases.
37,702 | If coefficient variance is incorrect (for a regression parameter), does that mean the model's log-likelihood is incorrect?

Assume that you have $m$ observations with no overlap, and $m_o$ observations that each appear twice in the sample. Your total sample size is therefore $N = m+2m_o$.

If observation $y_k$ is overlapped, then the sample contains another observation $\tilde y_k = 1-y_k$ whose explanatory variables are identical, both as a set and as numerical realizations, to those associated with $y_k$. We assume that some $y_k$ take the value $1$ while others take the value $0$ (so both ones and zeros appear in the overlapping subset).

If I understand correctly, overlap is akin to sample contamination of some sort. If so, the correctly specified model and likelihood should include only the $m+m_o$ observations, i.e. discard the subset containing the $\tilde y_k$'s.

Therefore the correct log-likelihood is (written in two parts for later use)

$$\ln L=\sum_{i=1}^m\left[y_i \ln p_i+(1-y_i)\ln (1-p_i)\right] + \sum_{k=m+1}^{m+m_o}\left[y_k \ln p_k+(1-y_k)\ln (1-p_k)\right]$$

where $p_i$ and $p_k$ are logistic functions of the explanatory variables and the unknown coefficients in the usual way.

The first-order conditions that the maximum likelihood estimator must satisfy are (taking the gradient with respect to the unknown coefficients)

$$\sum_{i=1}^m\left[y_i - p_i\right]\mathbf x_i + \sum_{k=m+1}^{m+m_o}\left[y_k - p_k\right]\mathbf x_k=0 \tag{1}$$

Suppose now that we ignore the issue of overlap and specify a log-likelihood over all $m+2m_o$ observations,

$$\ln L_o=\sum_{i=1}^{m+2m_o}\left[y_i \ln p_i+(1-y_i)\ln (1-p_i)\right]$$

which gives the first-order conditions

$$\sum_{i=1}^{m+2m_o}\left[y_i - p_i\right]\mathbf x_i =0$$

which we can decompose, due to the overlap (which exists whether or not we deal with it), into

$$\sum_{i=1}^{m}\left[y_i - p_i\right]\mathbf x_i +\sum_{k=m+1}^{m+m_o}\left[y_k - p_k\right]\mathbf x_k +\sum_{k=m+1}^{m+m_o}\left[\tilde y_k - p_k\right]\mathbf x_k =0$$

$$\Rightarrow \sum_{i=1}^{m}\left[y_i - p_i\right]\mathbf x_i +\sum_{k=m+1}^{m+m_o}\left[y_k - p_k\right]\mathbf x_k+\sum_{k=m+1}^{m+m_o}\left[1- y_k - p_k\right]\mathbf x_k =0$$

$$\Rightarrow \sum_{i=1}^{m}\left[y_i - p_i\right]\mathbf x_i +\sum_{k=m+1}^{m+m_o}\left[y_k - p_k +1 - y_k-p_k\right]\mathbf x_k =0$$

$$\Rightarrow \sum_{i=1}^{m}\left[y_i - p_i\right]\mathbf x_i +\sum_{k=m+1}^{m+m_o}\left[1-2p_k \right]\mathbf x_k =0 \tag{2}$$

Compare $(1)$ and $(2)$: I don't see how the coefficient estimates that satisfy $(2)$ will be the same as those that satisfy $(1)$. So I cannot understand the claim that "coefficient estimates are correct even in the presence of overlap", let alone the variance issue.
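The non-equivalence of $(1)$ and $(2)$ can be checked numerically. Below is a small simulation sketch (the data and dimensions are made up for illustration, not taken from the original question): fitting the logistic likelihood with and without the mirrored duplicates generally yields different coefficient estimates.

```python
# Numerical check: fit a logistic regression with and without the
# "mirrored" duplicates (same x, response flipped) and compare estimates.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
m, m_o = 200, 50
X = np.column_stack([np.ones(m + m_o), rng.normal(size=m + m_o)])
beta_true = np.array([0.5, 1.0])
y = rng.binomial(1, 1 / (1 + np.exp(-X @ beta_true)))

# Overlap: duplicate the last m_o rows with identical x but flipped y.
X_full = np.vstack([X, X[-m_o:]])
y_full = np.concatenate([y, 1 - y[-m_o:]])

def negloglik(b, X, y):
    # -[y ln p + (1-y) ln(1-p)] with p = logistic(X @ b)
    eta = X @ b
    return np.sum(np.log1p(np.exp(eta)) - y * eta)

b_correct = minimize(negloglik, np.zeros(2), args=(X, y)).x          # eq. (1)
b_naive   = minimize(negloglik, np.zeros(2), args=(X_full, y_full)).x  # eq. (2)
print(b_correct, b_naive)  # the two solutions generally differ
```

The mirrored pairs contribute the $(1-2p_k)\mathbf x_k$ terms of $(2)$, pulling the fitted $p_k$ toward $0.5$, so the naive fit shrinks relative to the correct one.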
37,703 | Information gain is KL divergence

I found myself stuck with the same question in the past. Here is a series of relationships that may help in understanding the differences between mutual information (MI), information gain (IG) and Kullback–Leibler divergence ($D_{\text{KL}}$). I used a simplified version of the Wikipedia notation.

\begin{equation}
\begin{aligned}
MI(X,A) &= \sum_a P(a)\, IG_{X,A}(X,a) \\
&=\sum_a P(a) D_{\text{KL}}{\left(P{(x|a)}\|P{(x|I)}\right)}\\
&=-\sum_{a,x} P(a) P{(x|a)} \ln \left(\frac{P{(x)}}{P{(x|a)}}\right) \\
&=-\sum_{a,x} P(a) P{(x|a)} \ln \left(\frac{P{(x)}P(a)}{P{(x|a)}P(a)}\right) \\
&=-\sum_{a,x} P(x,a) \ln \left(\frac{P{(x)}P(a)}{P(x,a)}\right) \\
&\equiv D_{\text{KL}}{\left(P(x,a)\|P{(x)}P(a)\right)}\\
&= -\sum_{a,x} P{(x|a)} P(a) \ln \left(P{(x)}\right) +\sum_{a,x} P(a) P{(x|a)} \ln \left(P{(x|a)}\right)\\
&= -\sum_{x} P{(x)} \ln \left(P{(x)}\right) +\sum_{a,x} P(a) P{(x|a)} \ln \left(P{(x|a)}\right)\\
&\equiv H(x) - H(x|a)
\end{aligned}
\end{equation}
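The chain of identities above can be verified numerically. The following sketch uses an arbitrary toy joint distribution and checks that $D_{\text{KL}}(P(x,a)\|P(x)P(a))$ equals $H(x)-H(x|a)$:

```python
# Numerical check of the identities above: for a toy joint distribution
# P(x, a), mutual information computed as D_KL(P(x,a) || P(x)P(a))
# should equal H(X) - H(X|A).
import numpy as np

P = np.array([[0.3, 0.1],
              [0.2, 0.4]])           # P(x, a); rows index x, columns index a
Px = P.sum(axis=1)                   # marginal P(x)
Pa = P.sum(axis=0)                   # marginal P(a)

mi_kl = np.sum(P * np.log(P / np.outer(Px, Pa)))   # D_KL(P(x,a) || P(x)P(a))

H_x = -np.sum(Px * np.log(Px))                     # H(X)
H_x_given_a = -np.sum(P * np.log(P / Pa))          # H(X|A) = -sum P(x,a) ln P(x|a)
mi_entropy = H_x - H_x_given_a

print(mi_kl, mi_entropy)   # the two values agree
```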
37,704 | Information gain is KL divergence

$Q$ is the "wrong" distribution, while $P$ is the "right" distribution. The length of the code for symbol $i$ under the "wrong" coding is $-\log(q_{i})$, so the average message length under the "wrong" coding is

$$-\sum_i p_{i} \log(q_{i}) = H(P,Q)$$

Under the "right" coding, the average message length is $H(P)$. Imagine that we use an entropy coder such as arithmetic coding, but have not estimated the probability distribution correctly. Then the average message length is a bit larger than the theoretical limit $H(P)$; the difference is the Kullback–Leibler divergence.
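The decomposition $H(P,Q) = H(P) + D_{\text{KL}}(P\|Q)$ is easy to verify numerically; the distributions below are arbitrary toy examples:

```python
# The expected code length under the wrong model Q exceeds the
# entropy H(P) by exactly KL(P || Q).
import numpy as np

p = np.array([0.5, 0.25, 0.25])   # true distribution
q = np.array([0.25, 0.5, 0.25])   # mistakenly assumed distribution

H_p   = -np.sum(p * np.log2(p))        # optimal average length (bits)
H_pq  = -np.sum(p * np.log2(q))        # average length using Q's code
kl_pq =  np.sum(p * np.log2(p / q))    # KL divergence in bits

print(H_p, H_pq, kl_pq)  # H_pq == H_p + kl_pq
```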
37,705 | Can I subsample a large dataset at every MCMC iteration?

About the subsampling strategies: as an example, consider two observations $X_1 \sim N(\mu_1, \sigma_1^2)$ and $X_2 \sim N(\mu_2,\sigma_2^2)$, and consider putting some priors on the means and variances. Let $\theta = (\mu_1, \mu_2, \sigma_1^2, \sigma_2^2)$; the posterior we want to evaluate is
$$
f(\theta|X_1, X_2) \propto f(X_1|\theta)f(X_2 | \theta)f(\theta)
$$
Consider now a Bernoulli variable $\delta \sim B(0.5)$: if $\delta=1$ we choose $X_1$, and if $\delta=0$ we choose $X_2$. The new posterior is
$$
f(\theta, \delta|X_1, X_2) \propto f(X_1, X_2|\delta,\theta)f(\theta)f(\delta)
$$
where $f(X_1, X_2|\delta,\theta) = f(X_1|\theta)^{\delta} f(X_2|\theta)^{1-\delta}$ and $f(\delta) = 0.5$. Now, if you want to sample $\delta$ with a Gibbs step, you have to compute both $f(X_1|\theta)$ and $f(X_2|\theta)$, because $P(\delta=1)= \frac{f(X_1|\theta) }{f(X_1|\theta) +f(X_2|\theta) }$. If you instead use Metropolis-Hastings, you propose a new state $\delta^*$ and need to compute only one of $f(X_1|\theta)$ and $f(X_2|\theta)$ (the one associated with the proposed state), but you also need the one associated with the last accepted state of $\delta$. So I am not sure that Metropolis will give you any advantage. Moreover, here we are considering a bivariate process; with a multivariate process, sampling the $\delta$'s with Metropolis can become very complicated.
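As a concrete illustration of the Gibbs step for $\delta$, here is a minimal sketch; the observations and the current value of $\theta$ are made-up numbers:

```python
# Gibbs step for the indicator delta in the two-observation normal model:
# both likelihood terms must be evaluated to form P(delta = 1).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
x1, x2 = 0.3, 2.1                        # the two observations
mu1, s1, mu2, s2 = 0.0, 1.0, 2.0, 1.0    # current draw of theta

# P(delta = 1) = f(x1|theta) / (f(x1|theta) + f(x2|theta))
f1 = norm.pdf(x1, mu1, s1)
f2 = norm.pdf(x2, mu2, s2)
p_delta1 = f1 / (f1 + f2)
delta = rng.binomial(1, p_delta1)        # sample the indicator
print(p_delta1, delta)
```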
37,706 | Nonparametric mixture model and clusters

Answering your point "Would you advise another point of view for this problem?": I would suggest that you actually have a look at your data. This can help you better plan what next steps to take. After all, the human eye-brain system is quite good at pattern recognition, and you might be able to decide more easily upon the number of clusters, should you opt for unsupervised clustering.

Accordingly, and since your data seem to be "high"-dimensional, you could try a principal components analysis (PCA), as this is a very quick analysis, especially for your dataset of 100k points. PCA, though, is not the only and not necessarily the most appropriate approach for dimension reduction with the goal of (2D/3D) visualization, as it is a parametric, linear method; your data may behave nonlinearly. I can suggest the dimension reduction toolbox for Matlab from Laurens van der Maaten, which includes a lot of different techniques. However, some of the techniques therein are inherently slow, so you might want to test them on subsampled data. A very recent and powerful nonparametric, nonlinear dimension reduction technique is BH-SNE, which should also work for your dataset size, although it could take around 30 minutes to 1 hour depending on your available hardware. Since you are interested in the detection of clusters, BH-SNE might be a good choice, as it (and its "predecessor" t-SNE) has shown impressive performance in this regard on various datasets (see the manuscript).

Finally, addressing your point on continuous/discrete data: this is something where I do not yet have experience of how it influences the dimension reduction. Accordingly, you might want to try either discretizing the continuous variables or ignoring the (few?) discrete variables, if possible. Alternatively, you might use the binary variable (person's reaction) to color-code the points in the low-dimensional (2D/3D) visualization.

P.S. Performing a hierarchical clustering (linkage analysis) and looking at the resulting dendrogram is another way of creating a low-dimensional representation of your data, which can help you better judge whether there are clusters and potentially also how many there are.
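As a starting point, the PCA projection suggested above takes only a few lines; the data here are a synthetic stand-in for the actual 100k-point dataset:

```python
# PCA via SVD: center the data, project onto the top two principal
# components, and report the variance they explain.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))          # n_samples x n_features stand-in
X[:, 0] += 3 * X[:, 1]                   # inject some correlation structure

Xc = X - X.mean(axis=0)                  # center each feature
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:2].T                        # 2-D projection for plotting
var_explained = S[:2]**2 / np.sum(S**2)  # fraction of variance per PC
print(Z.shape, var_explained)
```

A scatter plot of `Z`, color-coded by the binary reaction variable, would be the visualization described in the answer.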
37,707 | Violation Proportionality Cox model - Repeat analysis?

Though I'm not an expert in survival analysis, I offer my suggestions here and hope they will be helpful.

First of all, selecting variables by looking at their p-values is the wrong way, especially when the model is meant for statistical inference. You can read about this in multiple sources by searching for "stepwise regression drawbacks". The selection of variables should be based on your domain-specific knowledge: all variables that are relevant (in your opinion) should be present, whether or not their influence is significant. In this way you will report the effect of Sit adjusted for the list of variables used, and that is right. It seems that your research is exploratory rather than confirmatory; in that case, when interpreting the results, you should emphasize effect sizes (model coefficients, odds ratios or risk ratios) rather than p-values.

As for the violation of the proportionality assumption: by including the interaction between Sit and time, you are incorporating a linear dependence of Sit on time into the model. So if the true relationship between Sit and time is really close to linear, the proportionality assumption will hold, and all the model diagnostics remain relevant.
37,708 | Gaussian mixture regression in higher dimensions

Your idea is exactly what I would recommend. To enforce the positive definiteness of the covariance matrices, just use the Cholesky decomposition $\Sigma_i = L_iL_i^T$, and minimise the objective with respect to $L_i$ instead of $\Sigma_i$. I refer to this article for more details.

Also, you may try to optimise the log-likelihood instead of the mean square error.
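A minimal sketch of the Cholesky parameterization (dimensions and values are illustrative): optimising over the unconstrained entries of $L$ guarantees that $\Sigma = LL^T$ is positive (semi-)definite for any input vector.

```python
# Map an unconstrained flat vector to a valid covariance matrix via
# Sigma = L @ L.T, where L is lower triangular.
import numpy as np

d = 3
tril_idx = np.tril_indices(d)

def vec_to_cov(v):
    """Map d*(d+1)/2 free parameters to a positive-(semi)definite matrix."""
    L = np.zeros((d, d))
    L[tril_idx] = v
    return L @ L.T

v = np.array([1.0, 0.5, 2.0, -0.3, 0.1, 1.5])   # any real values are valid
Sigma = vec_to_cov(v)
eigvals = np.linalg.eigvalsh(Sigma)
print(eigvals)   # all non-negative by construction
```

An optimiser can then treat `v` as a free parameter vector, with no positive-definiteness constraint needed; to guarantee strict positive definiteness one can additionally exponentiate the diagonal entries of $L$.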
37,709 | Interval censoring

The most important issue here is understanding censoring and which type applies in your situation. So for your problems 1 and 3, understand the context of your problem; this will help you define the appropriate censoring method.

The R output says that the first group of failures is in the interval (14,16]. This doesn't mean the failure occurred at 14; it means that R assumed the data to be right-censored, which is the most common assumption in survival analysis. Why is the failure quoted as a range (14,16] rather than just a probability at 16? It's likely due to a confidence limit estimation.

Interpreting the R result, which is similar to SAS: the probability of failure at t=16 is 50%, at t=40 is 30%, and at t=94 is 20%.

Forget about trying to understand the issue by using three analysis packages. Pick one, understand the options you can set for censoring, and use it. A good link for R: here
37,710 | Overlapping sample t-test

You can try to use regression. Let $Y$, the response, be the performance measurement. Then make dummies for the persons and for the weeks. This might give you more $x$-variables than observations, but it can perhaps be salvaged: you can try treating the person variable as a random effect and week as a fixed effect.
37,711 | Overlapping sample t-test

You measure the performance of each person multiple times on different days and compare two groups containing the same people. You should therefore treat the two groups as non-independent, because the performance of one person is very likely to be correlated across days.

There are several options for testing your hypothesis. Some ideas (assuming the requirements are met):

- A simple method would be to calculate the average performance for each person, separately for week 1 and week 2, and then compare the two means using a paired t-test. However, this ignores some information, such as the size or composition of the workgroup.
- Another approach is a repeated-measures ANOVA, where you could include the performance of each day as well as covariates such as group size, and then test your hypothesis using contrasts. The composition of the workgroup is still ignored.
- If you have reason to believe that performance depends on the composition of the workgroup, you could fit a multilevel/mixed-design model and add the day as a random effect.
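The first option can be sketched as follows (synthetic data standing in for the real performance measurements):

```python
# Average each person's performance per week, then run a paired t-test
# (paired because the same people appear in both weeks).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_people, n_days = 12, 5
week1 = rng.normal(10.0, 2.0, size=(n_people, n_days))
week2 = rng.normal(11.0, 2.0, size=(n_people, n_days))  # shifted mean

m1 = week1.mean(axis=1)          # per-person average, week 1
m2 = week2.mean(axis=1)          # per-person average, week 2
t, p = stats.ttest_rel(m1, m2)   # paired t-test on the averages
print(t, p)
```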
37,712 | Comparable training and test cross-entropies result in very different accuracies | It is suggested by Prof. Frank Harrell in my post that accuracy in percentage is an improper scoring rule, in that the accuracy score can be optimized by the wrong model, and adding a highly important predictor may make the model less accurate. Besides, the accuracy scoring rule has high variance. On the other hand, the log-likelihood is considered a proper scoring rule.
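A toy illustration of that point (my own numbers, not from the linked post): two probability classifiers can have identical accuracy while the log-likelihood, being a proper scoring rule, still separates them.

```python
from math import log

def accuracy(y, p, thresh=0.5):
    return sum((pi >= thresh) == bool(yi) for yi, pi in zip(y, p)) / len(y)

def log_loss(y, p):
    # Mean negative log-likelihood of a binary probability forecast.
    return -sum(log(pi) if yi else log(1 - pi) for yi, pi in zip(y, p)) / len(y)

y   = [1, 1, 1, 0, 0, 0]
p_a = [0.90, 0.80, 0.60, 0.40, 0.20, 0.10]   # confident and mostly right
p_b = [0.55, 0.55, 0.55, 0.45, 0.45, 0.45]   # barely over the threshold
# accuracy is 1.0 for both, but log_loss(y, p_a) < log_loss(y, p_b)
```

This is why a model can improve markedly in cross-entropy with little or no movement in accuracy, and vice versa.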
37,713 | Determining the best correlated time series | How about you try a two-way ANOVA and a pairwise test, whether with your yearly data and/or the 5-year-period intervals. You may also do this with the raw, normalized, or Box-Cox-transformed data.
The idea is that you can look for any non-significant difference (relative to the reference station) between the distributions of precipitation per station.
I found this link helpful for starting your own two-way ANOVA in R: r-tutorial-series-two-way-anova
Sebastian
37,714 | Probability puzzle about zombies [closed] | I've actually written two papers on zombie epidemiology, so this is a question near and dear to my heart.
One suggestion I have is to use a dynamic model to estimate this, rather than just a time series (this is the technique used by the Smith? (the question mark is part of his name) paper).
There's a good site for tinkering with some of the parameters you've suggested that's the result of a paper I worked on: https://www.cartwrig.ht/apps/whitezed/
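As an illustration of the dynamic-model idea (a toy susceptible-zombie-removed system of my own, with made-up parameters, not the model from either paper), a simple Euler integration:

```python
def simulate_szr(s0=500.0, z0=1.0, beta=0.001, alpha=0.0005, dt=0.1, steps=1000):
    # Euler integration of a toy SZR model:
    #   dS/dt = -beta*S*Z          (humans bitten)
    #   dZ/dt = (beta - alpha)*S*Z
    #   dR/dt = alpha*S*Z          (zombies destroyed by humans)
    s, z, r = s0, z0, 0.0
    for _ in range(steps):
        bites, kills = beta * s * z, alpha * s * z
        s, z, r = s - bites * dt, z + (bites - kills) * dt, r + kills * dt
    return s, z, r

s, z, r = simulate_szr()  # total population s + z + r is conserved
```

Fitting such a model to outbreak data, rather than a plain time series, lets the estimated rate parameters carry the epidemiological interpretation.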
37,715 | Machine learning with ordered labels | This question seems to repeat itself every now and then (see here, for example), so I'll just summarize the answers and sources that have been accumulated in the time since this question was asked.
Redefine Objective
Ordinal Categorical Classification
A modified version of the cross-entropy, adjusted to ordinal target. It penalizes the model for predicting a wrong category that is further away from the true category more than for predicting a wrong category that is closer to the true category.
$$l(y,\hat{y}) = (1+\omega) \cdot CE(y,\hat{y}), \text{ s.t.} $$
$$\omega = \dfrac{|class(\hat{y})-class(y)|}{k-1}$$
Where $k$ is the number of possible classes. The operation $class(y)$ returns the predicted class of an observation, obtained by arg-maxing the probabilities in a multi-class prediction task.
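A dependency-free sketch of this loss for a single observation (the probabilities and classes are made up; the Keras implementation mentioned below does the same thing batched):

```python
from math import log

def ordinal_cross_entropy(y_true, probs):
    # (1 + w) * CE, where w in [0, 1] grows with the distance between the
    # true class and the arg-max predicted class.
    k = len(probs)
    pred = max(range(k), key=lambda c: probs[c])   # class(y_hat)
    w = abs(pred - y_true) / (k - 1)
    return (1 + w) * -log(probs[y_true])

# Same probability on the true class, but the far miss is penalised more.
near = ordinal_cross_entropy(2, [0.1, 0.6, 0.3, 0.0, 0.0])  # arg max one class away
far  = ordinal_cross_entropy(2, [0.6, 0.1, 0.3, 0.0, 0.0])  # arg max two classes away
```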
A Keras implementation of the ordinal cross-entropy is available here. An example (taken from the link)
import ordinal_categorical_crossentropy as OCC
model = ...  # define your Keras model here
model.compile(loss=OCC.loss, optimizer='adam', metrics=['accuracy'])
Treat your problem as a regression problem
Instead of classification task, treat your task as a regression task, and then round your prediction/map them to categories using any kind of method. As Stephan Kolassa mentions, the underlying assumption of this method is that one's scores are interval scaled.
Cumulative Link Loss
Originally proposed here. It is based on the logistic regression model, but with a link function that maps the logits to the cumulative probabilities of being in a given category or lower. A set of ordered thresholds splits this space into the different classes of the problem. The projections are estimated by the model.
Under this method, two loss functions are suggested: probit-based and logit-based.
A Keras implementation of the two versions is available here.
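For intuition, here is a minimal forward pass of the logit-based cumulative link model (the threshold values are hypothetical; real implementations estimate them):

```python
from math import exp

def clm_probs(z, thresholds):
    # P(y <= c) = sigmoid(threshold_c - z); class probabilities are the
    # successive differences of these cumulative probabilities.
    cum = [1 / (1 + exp(z - t)) for t in thresholds] + [1.0]
    return [cum[0]] + [cum[i] - cum[i - 1] for i in range(1, len(cum))]

p = clm_probs(0.0, [-1.0, 0.0, 1.0])  # four ordered classes
```

Because the thresholds are ordered, the cumulative probabilities are increasing, so the class probabilities are automatically non-negative and sum to 1.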
Other methods I won't cover
The following table was taken from here and describes other loss functions for an ordinal categorical target.
Treat ordinal categories as a multi-label problem
Under this method, we convert our ordinal targets into a matrix such that every class $c$ is encoded with its first $c$ entries set to 1, as follows:
Lowest -> [1,0,0,0,0]
Low -> [1,1,0,0,0]
Medium -> [1,1,1,0,0]
High -> [1,1,1,1,0]
Highest -> [1,1,1,1,1]
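The encoding above is easy to generate (a small sketch; classes are 0-indexed here):

```python
def ordinal_to_multilabel(c, k=5):
    # Class c (0-based) is encoded with its first c + 1 entries set to 1.
    return [1 if i <= c else 0 for i in range(k)]

codes = [ordinal_to_multilabel(c) for c in range(5)]  # Lowest .. Highest
# At prediction time, the class can be recovered by counting entries above 0.5.
```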
Then the loss function has to be changed, as this paper suggests. See this blog-post for an implementation.
37,716 | Yet another "Bayesian vs Maximum Likelihood" question | You should give a reference for your claim that the approximation obtained by simply replacing $\theta$ by its maximum likelihood estimator $\hat{\theta}$ is good. That approximation will forget about the uncertainty in the estimation of $\theta$, and might be a good approximation in some cases and bad in others. That must be evaluated on a case-by-case basis. It will mostly be bad when there are few observations. A particular case where it is bad is a binomial likelihood with $\hat{p}=0$.
One general approach to representing the uncertainty of estimation of $\theta$ is using the laplace approximation of the integral, an approach which should be better known. Start with the conditional density above in the form
$$ \DeclareMathOperator*{\argmax}{arg\,max}
f(y \mid x) = \int f(\theta \mid x) g(y \mid \theta) \; d\theta
$$ (which assumes that $y$ and $x$ are conditionally independent given $\theta$). Write $u(y; \theta) = f(\theta \mid x) g(y \mid \theta)$ and $\theta_y =\argmax_\theta u(y; \theta)$, that is, the value of $\theta$ giving the maximum, as a function of $y$. Suppose also that the maximum is found by setting the derivative equal to zero. Then write the (negative) second derivative as $u_y''= -\frac{\partial^2}{\partial^2 \theta} \log u(y;\theta)$ (evaluated at the maximum $\theta_y$). We have then $u_y'' > 0$.
Using Taylor expansion (and forgetting the error term, to get an approximation) we have
$$
\int f(\theta \mid x) g(y \mid \theta) \; d\theta = \\
\int \exp\left( \log u(y;\theta) \right)\; d\theta = \\
\int \exp\left( \log u(y;\theta_y)-\frac12 u_y'' (\theta - \theta_y)^2 +\cdots\right) \; d\theta \approx \\
u(y; \theta_y) \frac{\sqrt{2\pi}}{\sqrt{u_y''}}
$$
We can summarize this by saying that a better approximation for the (predictive) posterior of $Y$ is
$$
p(y \mid x) \approx f(\theta_y \mid x) g(y \mid \theta_y) \frac{\sqrt{2\pi}}{\sqrt{u_y''}}
$$
Note that $\theta_y$ can be seen as a maximum likelihood estimator using both $x$ and $y$ as data. This idea is related to the use of a profile likelihood function. (The above development assumes that $\theta$ is a scalar. The modification for a vector parameter is trivial.) A paper-length treatment can be found here.
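As a quick numerical sanity check of the approximation (my own toy example, not from the linked paper), take $u(\theta)=\theta^a(1-\theta)^b$, whose integral over $[0,1]$ is the Beta function $B(a+1,b+1)$:

```python
from math import exp, log, pi, sqrt, lgamma

a, b = 20, 20
mode = a / (a + b)                          # arg max of log u
log_u_mode = a * log(mode) + b * log(1 - mode)
second = a / mode**2 + b / (1 - mode)**2    # u'' in the text's notation:
                                            # -(d^2/dtheta^2) log u at the mode
laplace = exp(log_u_mode) * sqrt(2 * pi / second)

exact = exp(lgamma(a + 1) + lgamma(b + 1) - lgamma(a + b + 2))  # B(a+1, b+1)
rel_err = abs(laplace - exact) / exact      # about two percent here
```

The error shrinks as the integrand becomes more sharply peaked, i.e. as the effective sample size grows.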
37,717 | Yet another "Bayesian vs Maximum Likelihood" question | I would like to wax philosophically on @kjetil's answer, and specifically this statement:
That approximation will forget about the uncertainty in the estimation of θ, and might be a good approximation is some cases and bad in others. That must be evaluated on a case-by-case basis.
The reason that we can use the MLE to good effect is because of the following two things:
The real world is sane.
We know that "extraordinary claims require extraordinary evidence" - Carl Sagan
The world is sane
What I mean by the first point is that if you make up an arbitrary problem, out of 'all possible problems' in some sense, then the MLE is likely to be a terrible estimate. However, if you choose a real problem out of the set of problems that one might legitimately encounter in the real world, then the MLE works reasonably well because $P(\theta)$ is not unreasonable.
To illustrate, consider that we would like to estimate $\theta$, the probability of heads of some coin-of-unknown-fairness. Now, in order to even compute the Bayesian version, before we can start computing probabilities with respect to a dataset $X$ we first need to contemplate the world of possible coins. This world of coins in which we found our coin is essentially $P(\theta)$, our prior probability.
Ordinarily, this world is easy to contemplate, because we would have a real world coin that we need to estimate, and we live in the real world. However, in a non-real world, who knows what manner of strange and magical coins there be? In a particular weird and magical world, we might have the following prior:
$$P(\theta) = \begin{cases}
0 & \theta \in A \\
1/m(I - A) & \theta \in I - A \\
0 & else
\end{cases}$$
Where $m$ is the Lebesgue measure, $I$ is the unit interval, and $A$ is a set constructed with this clever method by Rudin.
We get some very strange behavior from this situation. Notably, there is an $m(A)$ chance that our MLE of $\theta$ is impossible. If we construct $A$ so that $m(A)$ is very close to 1, then the MLE of $\theta$ is almost certainly going to be bad in the sense that it will be impossible.
However, we don't live in this weird world. We live in the real world. Generally, when we pick up a coin, a prior for heads that is heavily weighted near $50\%$ is not unreasonable. At the very least, a continuous prior is almost certainly a good assumption. There is no mathematical necessity that our prior be continuous everywhere or anywhere, but we live in the real world, and the real world is a very special world out of the set of all mathematically feasible worlds. If $\theta_1$ is close to $\theta_2$ in the real world, then we anticipate that $\theta_1$ is nearly as likely as $\theta_2$ to be the correct proportion of heads. The fact that our world is a sane world is very convenient for scientists, who depend on this in order to estimate e.g. the likelihood that some coin will turn up heads. In short, priors in our world tend to be well-behaved, and this constraint along with the constraint discussed in the next section, means that the MLE is generally a likely one in our posterior distribution.
Extraordinary claims require extraordinary evidence
To illustrate this, consider Fisher's tea tasting lady. The tea tasting lady claims that she has skill at determining whether the tea or milk has been poured into the cup first. To test this, we design an experiment in which we randomize the order in which tea and milk are added to some cups of tea, and then we decide to choose the percent difference between the fraction of times she was correct and 0.5 (random guessing) as the MLE of her relative skill at tea tasting. If we pour 5 cups of tea for her to taste, then we are guaranteed to measure at least a 20% tea tasting skill, and it is not unlikely that we measure a 60% or 100% tea tasting skill.
However, we reflect briefly upon this experiment that we have designed, and it is clear that this is a terrible experiment. This is because we a priori judge this lady's claim to be nuts... there's just no way she can tell whether we poured the tea or the milk into the cup first. In other words, our prior is extremely skewed in this situation, so that our MLE is not very good in the sense that it is improbable given our prior.
As good scientists, however, we were not fooled by this, because we know that extraordinary claims require extraordinary evidence. If this lady really, really, for realsies can taste whether or not the tea was first, we need her to taste not only 5 cups, but 5000 cups! Of course, as the amount of evidence grows, the evidence overwhelms our skewed prior, and the MLE approaches the Bayesian estimate.
To sum up
In conclusion, since our world is sane, and since good scientists realize that extraordinary claims require extraordinary evidence, then generally when we compute a maximum likelihood estimate (and are inclined to take it seriously), it is not far from the maximum posterior estimate. This is because priors for problems that we test are generally very boring. They're not extremely skewed, they are continuous, and mostly differentiable, and don't tend to conflict with reality to any large degree. Thus, our MLE is usually quite likely under our prior. If the value is likely in the prior, and the evidence also supports the value, then it will be very likely in the posterior. Thus, the MLE and the MAP estimates tend not to be so different in real world problems. Of course, there is no guarantee that this is the case, but it is a convenient property of the sane world in which we live.
37,718 | Selecting an appropriate machine learning algorithm? | are there any rules of the format "IF feature X has property Z THEN do Y"?
Yes, there are such rules. Or rather: if x, then it is sensible to try y and z and avoid w.
However, what is sensible and what is not depends on:
your application (influences e.g. expected complexity of the problem)
the size of the data set: how many rows, how many columns, how many independent cases
the type of data / what kind of measurement. E.g. gene microarray data and vibrational spectroscopy data often have comparable size, but the different nature of the data suggests different regularization approaches.
and in practice also on your experience in applying different methods.
Without more specific information I think that is about as much as we can say.
If you want to have a general answer to the general problem, I recommend the Elements of Statistical Learning for a start.
37,719 | Selecting an appropriate machine learning algorithm? | There is a classic paper (Wolpert, 1996) that discusses the no-free-lunch theorem mentioned above. The paper can be found here. But according to the paper and most practitioners, "there are [rarely] a priori distinctions between learning algorithms." Note: I replaced "no" with "rarely".
Reference
Wolpert, D. H. (1996). The lack of a priori distinctions between learning algorithms. Neural Computation, 8(7), 1341-1390.
37,720 | Non-parametric correlation for continuous and dichotomous variables [closed] | Consider Rank Biserial Correlation.
"A formula is developed for the correlation between a ranking (possibly including ties) and a dichotomy, with limits which are always ±1. This formula is shown to be equivalent both to Kendall'sτ and Spearman's ρ"
Reference: E. E. Cureton (1956). "Rank Biserial Correlation", Psychometrika, 21, pp. 287-290.
37,721 | Failing at linear regression / prediction on a real data set | This probably calls for simultaneous equation modeling, rather than linear regression.
The probability of success depends on two separate equations, one measuring the quality of the opponent, person and machine, the other measuring the quality of the self, person and machine. They directly oppose each other, but only one outcome is observed. Without doing SEM, I believe your coefficients are biased, which may be why they are insignificant mush. This is reminiscent of the estimation of supply and demand equations, which often will net nothing unless well prepared.
37,722 | Validation of a questionnaire in a new population | @suzi One of the properties upon which Rasch analysis is based is that measures are invariant to subgroups. This property supports the development of computer adaptive testing and test equating. If this invariance of measure holds true in a population, then there is no differential item functioning (DIF). To assist you with your sample, you could run a Rasch analysis for each subgroup and compare the item functioning of each item for each subgroup. If the item measures differ by more than 0.50 logits (or greater than the 95% confidence intervals of the measures), then DIF is present and the item is not invariant. As long as your subgroups have no fewer than 70 subjects, you should be okay.
An excellent paper on applying this principle is "Rasch Fit Statistics as a Test of the Invariance of Item Parameter Estimates", Smith, Richard M. and Suh, Kyunghee, Journal of Applied Measurement 4(2) 153-163.
As stated in the comments, this is a large field and you might need help. If a paper is possible, you might seek help through the Rasch SIG. Software would include Winsteps, Facets, RUMM, eRm, and other programs in R.
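As a toy numeric sketch of the 0.50-logit comparison described above (using the logit of proportion-correct as a crude stand-in for a real Rasch item calibration — the proportions here are made up):

```python
import math

# Hypothetical proportions correct on one item for two subgroups
# (each subgroup assumed to have at least 70 subjects, as advised above).
p_group_a = 0.60
p_group_b = 0.45

def logit(p):
    # Convert a proportion to the logit (log-odds) scale used for Rasch measures.
    return math.log(p / (1.0 - p))

# Flag differential item functioning if the measures differ by > 0.50 logits.
gap = abs(logit(p_group_a) - logit(p_group_b))
has_dif = gap > 0.50
print(round(gap, 3), has_dif)  # 0.606 True
```

A real analysis would of course take the item measures from Rasch software (Winsteps, RUMM, eRm, etc.) rather than raw proportions.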
Hope this helps.
37,723 | Probabilities from Logistic Regression | If I understand your question correctly, you have predicted the probability for each individual, but want to know the average probability of a segment of those individuals? For instance you have 1000 individuals with the average rate of 65% but only 300 have blue eyes, what is the average rate of those with blue eyes? Then you can simply average your estimated probabilities for those with blue eyes.
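A minimal numeric sketch of that averaging (the fitted probabilities and the blue-eyes indicator are made up):

```python
import numpy as np

# Hypothetical fitted probabilities for ten individuals from a logistic
# regression, plus an indicator for which of them have blue eyes.
p_hat = np.array([0.9, 0.8, 0.7, 0.6, 0.5, 0.5, 0.6, 0.7, 0.6, 0.7])
blue_eyes = np.array([1, 1, 1, 0, 0, 0, 0, 1, 0, 0], dtype=bool)

# The segment's rate is just the mean predicted probability within it.
segment_rate = p_hat[blue_eyes].mean()
print(round(segment_rate, 3))  # 0.775
```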
37,724 | When does it make sense to reject/accept an hypothesis? | I really like your rainbow versions of my clouds, and may 'borrow' them for a future version of my paper. Thank you!
Your questions are not entirely clear to me, so I will paraphrase them. If they are not what you had in mind then my answers will be misdirected!
Are there situations where rejection of the hypothesis like "mean1 equals mean2" is scientifically valuable?
Frequentists would contend that the advantage of having well-defined error rates outweighs the loss of assessment of evidence that comes with their methods, but I don't think that that is very often the case. (And I would suspect that few proponents of the methods really understand the complete loss of evidential consideration of the data that they entail.) Fisher was adamant that the Neyman-Pearson approach to testing had no place in a scientific program, but he did allow that they were appropriate in the situation of 'industrial acceptance testing'. Presumably such a setting is a situation where rejection of a point hypothesis can be useful.
Most of science is more accurately modelled as estimation than as an acceptance procedure. P-values and the likelihood functions that they index (or, to use your term, address) provide very useful information for estimation, and for inferences based on that estimation.
(A couple of old StackExchange questions and answers are relevant: What is the difference between "testing of hypothesis" and "test of significance"? and Interpretation of p-value in hypothesis testing)
Are you missing the point of rejection of a hypothesis (of low a priori probability)?
I don't know if you are missing much, but it is probably not a good idea to add prior probabilities into this mixture! Much of the argumentation around the ideas relating to hypothesis testing, significance testing and evidential evaluation comes from entrenched positions. Such arguments are not very helpful. (You might have noticed how carefully I avoided bringing Bayesianism into my discussion in the paper, even though I wholeheartedly embrace it when there are reasonable prior probabilities to use. First we need to fix the 'P-values provide evidence, error rates do not' issue.)
Should scientists ignore results that fail to reach 'significance'?
No, of course not. Using an arbitrary cutoff to claim significance, or to assume significance, publishability, repeatability or reality of a result is a bad idea in most situations. The results of scientific experiments should be interpreted in light of prior understanding, prior probabilities where available, theory, the weight of contrary and complementary evidence, replications, loss functions where appropriate and a myriad of other intangibles. Scientists should not hand over to insentient algorithms the responsibility for inference. However, to make full use of the evidence within their experimental results scientists will need to much better understand what the statistical analyses can and do provide. That is the purpose of the paper that you have explored. It will also be necessary that scientists make a more complete account of their acquisition of evidence and the evolution of their understanding than what is usually presented in papers, and they should provide what Abelson called a principled argument to support their inferences. Relying on P<0.05 is the opposite of a principled argument.
37,725 | Generating causally dependent random variables | It seems that in order to reproduce the joint distribution $\rho(a,v)$, you should select new $a$ not only based on $v$, but based on the old $a$ also:
$a_{i+1} \sim \rho'(a_{i+1}|a_i, v_i)$
The question (to which I don't know the answer yet) is how to find $\rho'$ which produces $\rho$.
UPD:
You are to solve the following integral equation:
$$\rho(a, v) = \int da' \rho'\left(a|a', v-{a+a'\over 2}\Delta t\right) \rho(a', v-{a+a'\over 2}\Delta t)$$
Approximating the function $\rho$ with a histogram, you turn this to a system of linear equations:
$$\cases{
\rho(a, v) = \sum_{a'} \rho'\left(a|a', v-{a+a'\over 2}\Delta t\right) \rho(a', v-{a+a'\over 2}\Delta t) \\
\sum_a \rho'\left(a|a', v'\right) = 1}$$
This system is underdetermined. You may apply a smoothness penalty to obtain a solution.
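The last step (an underdetermined linear system plus a smoothness penalty) can be illustrated with a generic regularized least-squares solve. Everything below — the system size, the random matrix, the penalty weight — is a made-up toy, not the actual histogram discretization of $\rho'$:

```python
import numpy as np

rng = np.random.default_rng(0)
n_eq, n_unk = 8, 20                                   # fewer equations than unknowns
A = rng.normal(size=(n_eq, n_unk))
b = A @ np.sin(np.linspace(0.0, np.pi, n_unk))        # consistent right-hand side

# Second-difference operator: penalizing ||D x||^2 favors smooth solutions.
D = np.diff(np.eye(n_unk), n=2, axis=0)
lam = 1.0

# min ||A x - b||^2 + lam * ||D x||^2, solved as one stacked least-squares problem.
A_aug = np.vstack([A, np.sqrt(lam) * D])
b_aug = np.concatenate([b, np.zeros(D.shape[0])])
x_hat, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)

# The penalized objective at the solution cannot exceed its value at x = 0.
obj = np.sum((A @ x_hat - b) ** 2) + lam * np.sum((D @ x_hat) ** 2)
print(obj <= np.sum(b ** 2))  # True
```

For the real problem one would also need to enforce the normalization constraints $\sum_a \rho'(a|a',v')=1$, e.g. as additional (heavily weighted) rows of the stacked system.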
37,726 | Generating causally dependent random variables | Doesn't the GPS data contain position $p$? I would have thought that, not only is $v_{i+1}$ dependent upon $v_{i}$ and $a_{i}$, but $a_{i+1}$ would also be dependent upon $p_{i}$. Consider: in any road network there are bottlenecks, speed limits, signals, intersections, steep gradients, etc. that are geolocated. So something like an ensemble (distribution) defined by:
$F_{a} = Pr ( A_{i+1} \le a_{i+1}\ |\ a_{i},v_{i},p_{i} )$
$v_{i+1} = v_{i} + a_{i}dt$
For such an ensemble, the difficulty will lie in the nature of the data. It is likely that the true population will be asymmetric, non-linear (piece-wise) and may not have defined moments. These characteristics may not be evident within the sample you have at hand.
As @whuber has stated, the problem, i.e. exactly what you are seeking to produce, does not yet seem fully and clearly defined. It is not clear as to whether you are interested in the ensemble or more so the individuals.
37,727 | Can I use a confidence interval of Poisson mean for its variance | Your approach is basically correct but heavily depends on the strong distributional assumption you are making. If it is violated, even for very large samples, the confidence regions won't have the stated coverage probabilities. That's why statisticians try to avoid such reasoning if there are more robust methods available.
There is actually an example (not related to confidence intervals but point estimation) where your approach is frequently used by applied statisticians: Assume you want to estimate the true 97.5% quantile e.g. to detect outliers. Often, instead of calculating the sample 97.5% quantile, researchers assume normality and estimate the true quantile by sample mean plus two standard deviations. If the underlying distribution is normal (which it usually has no reason to be), this estimate is more efficient than the one based on sample quantiles.
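A quick simulated comparison of the two estimators (the sample and seed below are arbitrary; under normality both should land near the true 97.5% quantile):

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(loc=10.0, scale=2.0, size=5000)

# Non-parametric: the sample 97.5% quantile.
q_sample = np.quantile(x, 0.975)

# Parametric shortcut: sample mean plus two sample standard deviations.
q_normal = x.mean() + 2.0 * x.std(ddof=1)

# True 97.5% quantile is 10 + 1.96 * 2 = 13.92; mean + 2 sd targets 14.0.
print(round(q_sample, 2), round(q_normal, 2))
```

Under a non-normal truth the parametric shortcut would be biased, which is exactly the robustness trade-off described above.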
37,728 | Regression of data that includes a date | It sounds like you need to use time series methods, such as ARMA or ARIMA, that let you calculate a regression using time as an independent variable without violating the independent observations assumption of OLS.
You may want to try a two step analysis:
- first use time as a single predictor variable and use a suitable time series method
- second, see if there is any meaningful difference in residuals between the two suppliers. (A simple t-test might be sufficient.)
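The two steps might look like this on made-up data (a plain linear trend stands in for a full ARMA/ARIMA fit, and a hand-computed Welch t-statistic for the t-test):

```python
import numpy as np

rng = np.random.default_rng(1)
days = np.arange(200)
supplier = rng.integers(0, 2, size=200)          # 0 = supplier A, 1 = supplier B
price = 100.0 - 0.05 * days + 2.0 * supplier + rng.normal(0.0, 1.0, size=200)

# Step 1: model price on time alone (here just a linear trend).
slope, intercept = np.polyfit(days, price, 1)
resid = price - (intercept + slope * days)

# Step 2: compare residuals between the two suppliers (Welch t-statistic).
r0, r1 = resid[supplier == 0], resid[supplier == 1]
t_stat = (r1.mean() - r0.mean()) / np.sqrt(
    r0.var(ddof=1) / r0.size + r1.var(ddof=1) / r1.size)
print(abs(t_stat) > 2.0)  # the built-in supplier effect is easily detected: True
```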
37,729 | Regression of data that includes a date | There are several ways. An option is to convert dates into days after the very first day. Also, you could have additional variables for day of the week (trends across the week) and the month (to see trends in certain times of the year). By doing so, you can use multiple regression.
To get the variable "# of days after the first day", I believe (in both Excel and R) you can simply subtract the earlier date from the later date and get the day difference. So maybe try subtracting 1/1/2010 from all your dates. You should also tell R that the new value is numeric using as.numeric()
EDIT: R seems to read in the year first, so you may have to mess around with the dates a bit. See this: https://stackoverflow.com/questions/2254986/how-to-subtract-days-in-r
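For comparison, the same day-difference computation sketched in Python (the answer itself uses R/Excel; the dates below are arbitrary examples):

```python
from datetime import date

reference = date(2010, 1, 1)
dates = [date(2010, 1, 1), date(2010, 3, 15), date(2011, 1, 1)]

# Subtracting dates yields timedeltas; .days gives the numeric predictor.
days_after = [(d - reference).days for d in dates]
print(days_after)  # [0, 73, 365]
```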
Time series analysis is another approach, but I'm not too familiar with it.
37,730 | Regression of data that includes a date | I can advise you to use a non-linear function for the time variable, because the price falls by less with each additional time unit. Otherwise the price would eventually fall below zero. Moreover, there may be periods when the trend changed upward. Thus I recommend using cubic splines for the time variable.
Experience whispers to me that I would check the following model:
Y = country_parameter * price(t) * e
where price(t) is a function, preferably cubic spline, but it may also be whatever, even linear trend.
Note that there are multiplication signs, not sums, in the model.
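Because the terms multiply, taking logs turns the model into an additive one that ordinary regression tools can fit. A toy sketch (made-up declining price curve and country parameter; a cubic polynomial in time stands in for a cubic spline):

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0.0, 10.0, 50)
country_param = 1.5
price_t = 100.0 * np.exp(-0.1 * t)               # hypothetical smooth price(t)
y = country_param * price_t * np.exp(rng.normal(0.0, 0.05, size=50))

# On the log scale: log Y = log(country_param) + log(price(t)) + log(e).
coefs = np.polyfit(t, np.log(y), 3)
fitted = np.exp(np.polyval(coefs, t))

corr = np.corrcoef(fitted, y)[0, 1]
print(round(corr, 3))
```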
37,731 | Regression of data that includes a date | Pick a reference date, say 1/1/2010, and make a new variable time that is the difference between the date and the reference date, where the difference is computed in, say, days.
Now run a linear regression (or something similar) with time and supplier as the two predictor variables and price as the response variable.
This is just a starting point.
37,732 | Are there any alternatives to simulation for determining the distribution of number of events from two dependent non-homogeneous Poisson processes? | That's an interesting problem. I'm not sure I have caught all you mean, but have you thought about reformulating some of your problems as hypothesis tests? Like:
null hypothesis H0: $x > y$
alternative hypothesis H1: $x \le y$
and then perform a likelihood ratio test? Then the extracted p-value tells you whether H0 is rejected given a certain significance level.
The reason I'm mentioning this is that performing a likelihood ratio test is the same as performing two minimizations, which can be much faster than MC integration. However, the integral inside the exp might still require an integration.
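A schematic toy of the likelihood-ratio mechanics (a trivial Gaussian location model with known unit variance, not the Poisson-process likelihood in question; scipy is assumed available):

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(7)
data = rng.normal(loc=0.3, scale=1.0, size=100)

def nll(m):
    # Negative log-likelihood for N(m, 1), up to an additive constant.
    return 0.5 * np.sum((data - m) ** 2)

mu_hat = optimize.minimize_scalar(nll).x     # optimization under H1 (mu free)
lr = 2.0 * (nll(0.0) - nll(mu_hat))          # H0 fixes mu = 0 (no fit needed here)
p_val = stats.chi2.sf(lr, df=1)              # asymptotic chi-square reference
print(round(lr, 3), round(p_val, 3))
```

For a composite null like $x > y$ the H0 fit is itself a constrained minimization, giving the "two minimizations" mentioned above.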
HTH
37,733 | Are there any alternatives to simulation for determining the distribution of number of events from two dependent non-homogeneous Poisson processes? | I first address 2 problems with the question:
The so-called time inhomogeneous factors preclude the process from being Poisson, because the number of goals in some time interval is not independent of the earlier number of goals. In other words, the transition rate is state dependent. Even the linked article (P.7) calls each process a birth process, reducing only to a homogeneous Poisson process when the intensity is constant.
$x!$ and $y!$ should be excluded from the likelihood, as in Eq. (3.5) of the linked article. Presumably, the OP thought Eq. (3.5) gave the likelihood of a match with some set of unordered interarrival times, which would have to be divided by the number of set permutations to obtain the likelihood for an ordered set. This is unnecessary, and would have been wrong even if Eq. (3.5) were the likelihood for an unordered set, because the time-dependent intensities would result in different probabilities for each ordering.
Then to address the question of score line distribution, I will point out that although not mentioned by the linked article, the score line can be modeled as a birth-death process:
$$
p_{x,y}'(t)=\lambda_{x-1,y}(t)p_{x-1,y}(t)+\mu_{x,y-1}(t)p_{x,y-1}(t)-(\lambda_{x,y}(t)+\mu_{x,y}(t))p_{x,y}(t)
$$
$$
p_{x,y}(0)=\delta_{x,0}\,\delta_{y,0}
$$
$$
\lambda_{-1,y}(t)=0
$$
$$
\mu_{x,-1}(t)=0
$$
The first equation is a population balance or master equation, whose solution has been widely studied, e.g. by Feller. I don't believe analytic solutions exist in general, whereas numeric solution requires truncation at some maximum $x$ and $y$. What maximum to use depends on the probabilities to be computed from $p_{x,y}(t)$. E.g. $p_{1,0}(t)$ requires only a maximum $x=1$, $P(x+y<2.5)$ requires maxima of 2, while $P(x>y)$, $P(y<x)$, and $P(x=y)$ all require maxima large enough that $p_{x>max,y}$ and $p_{x,y>max}$ are negligible.
Many numeric solutions are possible, e.g. finite difference/element/spectral methods. If large maxima are required, approximating the difference equations with a differential equation in continuous $x$ and $y$ may be more efficient.
Here is some Mathematica code one might use as a template, with maxima, $\lambda_{x,y}(t)$, and $\mu_{x,y}(t)$ to be specified:
max=2;
\[Lambda][x_,y_,t_]=1;
\[Mu][x_,y_,t_]=1;
\[Lambda][-1,y_,t_]=0;
\[Mu][x_,-1,t_]=0;
DSolve[Flatten[Table[{
D[p[x,y,t],t]==\[Lambda][x-1,y,t]p[x-1,y,t]+\[Mu][x,y-1,t]p[x,y-1,t]
-(\[Lambda][x,y,t]+\[Mu][x,y,t])p[x,y,t],
p[x,y,0]==DiscreteDelta[x,y]},{x,0,max-1},{y,0,max-1}]],
Flatten[Table[p[x,y,t],{x,0,max-1},{y,0,max-1}]],t]
$$
\left\{\left\{p(0,0,t)\to e^{-2 t},p(0,1,t)\to e^{-2 t} t,p(1,0,t)\to e^{-2 t} t,p(1,1,t)\to e^{-2 t} t^2\right\}\right\}
$$
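The same truncated system can also be integrated numerically; a sketch in Python using scipy's solve_ivp, with the same constant unit intensities and max = 2 as the Mathematica template, checked against the analytic result above at t = 1 (where all four probabilities equal e^{-2}):

```python
import numpy as np
from scipy.integrate import solve_ivp

MAX = 2  # truncate the score grid at x, y < MAX

def lam(x, y, t):
    return 1.0   # home-goal intensity (constant toy value)

def mu(x, y, t):
    return 1.0   # away-goal intensity (constant toy value)

def rhs(t, p_flat):
    # Master equation: inflow from (x-1, y) and (x, y-1), outflow from (x, y).
    p = p_flat.reshape(MAX, MAX)
    dp = np.zeros_like(p)
    for x in range(MAX):
        for y in range(MAX):
            inflow = (lam(x - 1, y, t) * p[x - 1, y] if x > 0 else 0.0) \
                   + (mu(x, y - 1, t) * p[x, y - 1] if y > 0 else 0.0)
            dp[x, y] = inflow - (lam(x, y, t) + mu(x, y, t)) * p[x, y]
    return dp.ravel()

p0 = np.zeros((MAX, MAX))
p0[0, 0] = 1.0                                   # match starts at score 0-0
sol = solve_ivp(rhs, (0.0, 1.0), p0.ravel(), rtol=1e-10, atol=1e-12)
p1 = sol.y[:, -1].reshape(MAX, MAX)

# Analytic solution at t = 1: p00 = p01 = p10 = p11 = e^-2.
print(np.allclose(p1, np.exp(-2.0) * np.ones((2, 2)), atol=1e-6))  # True
```

Time-dependent intensities just mean lam and mu actually use their t argument; the same integrator applies.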
37,734 | Finding correlations in longitudinal data analysis | There are two main options available (actually more but let us for simplicity only mention two):
First, you can use standard regression techniques (e.g. linear regression or methods for discrete choice), controlling, for example, for the length and content of the videos watched (concern two). Of course there will be intra-group and intra-individual correlation. You can adjust your standard errors for intra-group correlation, making the variance matrix estimate robust to heteroscedasticity or arbitrary intra-group correlation (and to intra-individual correlation if participants do not switch between groups). More information is available here:
http://www.nber.org/WNE/lect_8_cluster.pdf
and here is more about estimating cluster-robust standard errors in R:
http://diffuseprior.wordpress.com/2012/06/15/standard-robust-and-clustered-standard-errors-computed-in-r/
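To sketch what those references compute (a dependency-free toy, not code from the linked notes; `cluster_robust_ols` is a hypothetical helper name): for a simple regression $y = a + bx + e$, the CR0 cluster-robust variance is the sandwich $(X'X)^{-1}\big(\sum_g X_g'u_g u_g' X_g\big)(X'X)^{-1}$, where the inner sum runs over clusters rather than over single observations.

```python
from collections import defaultdict

def cluster_robust_ols(x, y, groups):
    """OLS of y on [1, x] plus the CR0 cluster-robust SE of the slope."""
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    slope = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
    intercept = ybar - slope * xbar
    resid = [yi - intercept - slope * xi for xi, yi in zip(x, y)]

    # Bread: (X'X)^{-1} for the 2-column design X = [1, x].
    sx, sx2 = sum(x), sum(xi * xi for xi in x)
    det = n * sx2 - sx * sx
    binv = [[sx2 / det, -sx / det], [-sx / det, n / det]]

    # Meat: sum over clusters g of (X_g' u_g)(X_g' u_g)'.
    score = defaultdict(lambda: [0.0, 0.0])
    for xi, ui, gi in zip(x, resid, groups):
        score[gi][0] += ui        # intercept column
        score[gi][1] += xi * ui   # slope column
    meat = [[0.0, 0.0], [0.0, 0.0]]
    for s0, s1 in score.values():
        meat[0][0] += s0 * s0
        meat[0][1] += s0 * s1
        meat[1][0] += s1 * s0
        meat[1][1] += s1 * s1

    # Sandwich: binv @ meat @ binv; entry (1,1) is the slope variance.
    m = [[sum(binv[i][k] * meat[k][j] for k in range(2)) for j in range(2)]
         for i in range(2)]
    var_slope = sum(m[1][k] * binv[k][1] for k in range(2))
    return intercept, slope, var_slope ** 0.5

# Within-cluster residuals cancel here, so the clustered SE collapses to 0,
a1, b1, se_clustered = cluster_robust_ols([0, 0, 1, 1], [0, 1, 1, 2], [1, 1, 2, 2])
# while with singleton clusters CR0 reduces to the HC0 robust SE (0.5 here).
_, _, se_singleton = cluster_robust_ols([0, 0, 1, 1], [0, 1, 1, 2], [1, 2, 3, 4])
```

In practice one would of course use a library routine (as in the two links) plus a small-sample correction; this sketch only shows where the intra-group correlation enters the variance estimate.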
The second approach is to use mixed models (mentioned by gung). A good introduction is Gelman, Andrew, and Jennifer Hill, "Data Analysis Using Regression and Multilevel/Hierarchical Models" (2007). These methods are more efficient than the first approach, but, of course, you have to make more assumptions about the form of the intra-group and intra-individual correlation (the variance matrix) to gain that efficiency.
With both approaches you can easily calculate the (partial) correlation using a linear (additive) functional form, or other measures of association.
As far as I understand you, each peer acts on his/her own, so there should not be peer effects, which would otherwise require more complex models (and stronger assumptions about the kind of peer interactions). You should test whether there is actually intra-group correlation; it might be sufficient to control for the longitudinal dimension.
37,735 | glmnet, categorical variable, group lasso? | As far as I am aware, glmnet doesn't have this feature implemented yet. @Glen_b's suggested type.multinomial option groups variables across all responses in a multinomial model, but there's no way of grouping independent variables in a model. See
https://cran.r-project.org/web/packages/grplasso/grplasso.pdf
for an alternative.
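For intuition about how the group penalty differs from the plain lasso (a toy sketch, not the grplasso implementation; `group_soft_threshold` is a hypothetical name): the group-lasso proximal operator applies block soft-thresholding, so all the dummy coefficients of one categorical variable are shrunk jointly and enter or leave the model together.

```python
def group_soft_threshold(v, lam):
    """Block soft-thresholding, the proximal operator behind the group
    lasso penalty: the whole coefficient group is shrunk jointly, and is
    zeroed out together once its Euclidean norm falls below lam."""
    norm = sum(x * x for x in v) ** 0.5
    if norm <= lam:
        return [0.0] * len(v)  # the entire dummy block leaves the model
    scale = 1.0 - lam / norm
    return [scale * x for x in v]

shrunk = group_soft_threshold([3.0, 4.0], 2.0)  # norm 5 -> scaled by 0.6
zeroed = group_soft_threshold([0.3, 0.4], 1.0)  # norm 0.5 <= 1 -> all zero
```

An elementwise lasso would instead zero each dummy separately, which is exactly the behaviour one usually wants to avoid with multi-level factors.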
37,736 | Test for variation using random effects model | There are several things you could do to test whether there is genetic variance.
First, however, I wonder why you want separate models for "each combination of Treat1 and Treat2 (AP,AQ,BP,BQ)"? I don't know anything about the substantive area of application here, and I may be misunderstanding your data, but I think you can have 5 varying intercepts / random effects here: Block, Genotype, Treat1, Treat2, and Treat1.Treat2 -- an interaction term that you create and add to the dataset. One model for everything.
Anyway, back to your question.
First, a formal test of significance can be conducted by fitting this model with ML (REML = FALSE):
apmodel1 <- lmer(Age ~ (1|Genotype) + (1|Block), df, REML = FALSE)
Then running a model without the Genotype effect:
apmodel2 <- lmer(Age ~ (1|Block), df, REML = FALSE)
And performing a likelihood ratio test:
anova(apmodel2, apmodel1)
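The statistic behind that anova() call is $2\{\ell(\text{full}) - \ell(\text{reduced})\}$ referred to a $\chi^2$ distribution. As a hedged sketch (not the lme4/anova internals), the df = 1 tail can be computed with only the standard library; one caveat is that for a variance component the null value $\sigma^2 = 0$ lies on the boundary of the parameter space, so the plain $\chi^2_1$ p-value tends to be conservative.

```python
import math

def lrt_pvalue_df1(loglik_full, loglik_reduced):
    """Likelihood-ratio p-value against chi-square with 1 df.
    For X ~ chi2_1: P(X > s) = erfc(sqrt(s / 2))."""
    stat = max(2.0 * (loglik_full - loglik_reduced), 0.0)
    return math.erfc(math.sqrt(stat / 2.0))

p_sig = lrt_pvalue_df1(-100.0, -101.92)  # stat = 3.84, near the 5% cutoff
p_null = lrt_pvalue_df1(-100.0, -100.0)  # identical fits -> p = 1
```

The log-likelihood values here are made-up numbers purely to exercise the formula; in R you would extract them with logLik().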
Second, and more informally, you might calculate statistics such as the intraclass correlation coefficient (ICC) to provide a measure of how much variance Genotype is accounting for. Try the ICC.lme function from the psychometric library.
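As a concrete sketch of the ICC (a toy balanced one-way-ANOVA estimator, not the ICC.lme implementation): with $n$ groups of $k$ observations, ICC(1) $= (\mathrm{MSB} - \mathrm{MSW})/(\mathrm{MSB} + (k-1)\,\mathrm{MSW})$.

```python
def icc1(groups):
    """ICC(1) from a balanced one-way ANOVA:
    (MSB - MSW) / (MSB + (k - 1) * MSW) with k observations per group."""
    n, k = len(groups), len(groups[0])
    grand = sum(sum(g) for g in groups) / (n * k)
    means = [sum(g) / k for g in groups]
    msb = k * sum((m - grand) ** 2 for m in means) / (n - 1)
    msw = sum((x - m) ** 2
              for g, m in zip(groups, means) for x in g) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

perfect = icc1([[1, 1], [3, 3], [5, 5]])  # no within-group variance -> 1.0
negative = icc1([[0, 2], [0, 2]])         # identical group means -> -1.0
```

A value near 1 means Genotype accounts for almost all of the variance; values near 0 (or negative estimates, as in the second example) suggest the grouping explains little.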
Finally, the most interesting method would be to produce plots of predicted effects, with their corresponding uncertainty estimates, for each Genotype, Treatment, etc. The focus here shifts from Genotype as a factor to each level of Genotype as a (modelled / varying) effect.
37,737 | Analyze a football match: similar players with DBSCAN and similar trajectories with TRACLUS | There are 2 questions there (1st point is not a question). All answers are below.
Q1: How can you cluster players that pass the ball to each other more often?
In my view this is a loaded task that is better broken into the following:
Identify whether a player is passing a ball. You have to look at the distribution of sensory data that is typically associated with passing the ball. There are many ways to do this. One fancy way could be to replicate this empirically collected dataset in a 3D game in which you equip the players with similar sensors. The nice thing about the game is that you can identify the target variables that you wish to predict (i.e. you know if they are passing the ball). This way, using the game, you can correlate the distribution of sensory data with the target variables, ultimately generating a labelled set of samples. Finally, you apply a domain-adaptation step by which your 3D-game model is transformed to the domain of the empirically collected dataset (so you can run it there with less error than without the domain-adaptation step).
Identify whether a player is receiving a ball. Similar to the point above, but for the distribution of sensory data upon receipt of the ball.
Identifying linked passes and receives. This is relatively trivial: two players pass balls to each other iff a receive happens after a pass. To reduce noise, you may wish to add additional constraints to this assumption to ensure that accidental passes are set apart from intentional ones.
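The linking step above can be sketched with a toy event-matching rule (illustrative only; the event representation, player names, and the 3-second window are assumptions, not from the question's data):

```python
def link_passes(events, max_gap=3.0):
    """events: (time, kind, player) tuples, kind in {'pass', 'receive'}.
    A pass is linked to the next receive by a *different* player within
    max_gap seconds -- the extra time constraint mentioned above."""
    links = []
    pending = None  # (time, passer) of the last unmatched pass
    for t, kind, player in sorted(events):
        if kind == 'pass':
            pending = (t, player)
        elif kind == 'receive' and pending is not None:
            t0, passer = pending
            if player != passer and t - t0 <= max_gap:
                links.append((passer, player))
            pending = None
    return links

events = [(0.0, 'pass', 'A'), (1.0, 'receive', 'B'),
          (5.0, 'pass', 'B'), (9.5, 'receive', 'C')]
links = link_passes(events)  # the B -> C gap (4.5 s) is rejected as noise
```

Counting the linked pairs per ordered player pair then yields the pass-frequency matrix on which a clustering algorithm such as DBSCAN could operate.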
Q2: Can I exploit something else from this type of dataset? (so that you expand point 2)
Fatigue/stamina/speed as a function of activity and time. This could be relatively easy to estimate by looking at how quickly the sensor positions/speeds are changing.
Once you identify the point above, you can estimate other parameters, such as recovery time.
Additionally, correlate all of the above with a player's relationship to his team. For example, does a player pass balls more often when he is tired? To which players, or in which directions, does he tend to pass when he is tired? Does he change his passing targets/directions when he recovers his stamina?
37,738 | "Monte Carlo Kalman Filter" vs Unscented Kalman Filter | I would recommend looking at this paper:
Hommels A, Murakami A, Nishimura SI. A comparison of the ensemble Kalman filter with the unscented Kalman filter: application to the construction of a road embankment. Geotechniek. 2009;13(1):52.
I believe the authors of the above paper conclude that MCKF or EnKF is better than UKF in terms of accuracy and takes the same computational time.
Edit: I agree with the people who have commented below that the performance is dependent on the process and measurement models used. In fact there is another paper for a different application that says UKF is better than MCKF:
T. Kodama and K. Kogiso, "Applications of UKF and EnKF to estimation of contraction ratio of McKibben pneumatic artificial muscles," 2017 American Control Conference (ACC), Seattle, WA, 2017, pp. 5217-5222.
doi: 10.23919/ACC.2017.7963765
Based on my understanding, I feel that the biggest difference between the methods is that the UKF weights its samples, whereas the MCKF (or EnKF) gives equal weight to all of them. On the other hand, the UKF has parameters to tune that are not intuitive, such as choosing the sigma value, and based on personal experience those parameters do affect the results quite a bit. The MCKF does not have such issues, but the need to work with several samples means more computational time (unless you have the luxury of parallelization, of course). As a rule of thumb, I usually go to the MCKF first and, if the results are good, I then spend time tuning the UKF to match or improve the results and receive faster computation in return.
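To make the equal-weighting contrast concrete, here is a deliberately minimal scalar EnKF analysis step (an illustrative sketch, not either paper's algorithm; it omits the per-member observation perturbation that the stochastic EnKF adds, so the posterior spread it produces is understated):

```python
def enkf_update(ensemble, obs, obs_var):
    """One scalar EnKF analysis step: every member is moved by the same
    Kalman gain, i.e. all samples carry equal weight (unlike UKF sigma
    points, which carry individual weights)."""
    n = len(ensemble)
    mean = sum(ensemble) / n
    var = sum((x - mean) ** 2 for x in ensemble) / (n - 1)  # sample variance
    gain = var / (var + obs_var)                            # Kalman gain
    return [x + gain * (obs - x) for x in ensemble]

updated = enkf_update([0.0, 2.0], obs=3.0, obs_var=2.0)  # gain = 0.5
```

Note how the gain comes entirely from ensemble statistics: no tuning parameter plays the role that the sigma-point scaling does in the UKF.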
37,739 | Sample size with respect to prediction in classification and regression | Basically, I think you are asking, intuitively, how sample size affects machine learning techniques. The real factors that determine the required sample size are the dimensionality of the space the data live in and its sparseness. I will give you two examples, because I find it hard to summarise everything in one...
Let's say you have some dense data and you try to fit a model using some regression. If the data follow a polynomial of degree $n$, then you need more than $n$ data points so that your algorithm can find the correct curve. Otherwise, it will produce an over-simplistic model, different from reality. Of course, in reality there will be noise, so you need even more data to make a better model.
Let's say you have some sparse data, i.e., most dimensions are zeros. Such an example is text, like tweets or SMS (forget books for now), where the frequency of each word is a dimension and of course documents don't have the majority of the words in the dictionary (sparse space).
You try to classify tweets based on their topic. Algorithms like kNN, SVMs, etc. work on similarities between samples; e.g. 1-NN will find the tweet in the training set closest to the one that you try to classify and will assign the corresponding label. However, because of the sparseness... guess what... most similarities are zero! Simply because documents don't share enough words. To be able to make predictions you need enough data so that something in your training set resembles the unknown documents you try to classify. Of course, since it is a continuous space you can never fill all the gaps between samples... but the more data you put in, the higher the chance that the unknown sample will find something similar in the training set.
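The zero-similarity point is easy to demonstrate (toy sketch; the helper name and word counts are made up): representing documents as sparse word-count dictionaries, two documents with disjoint vocabularies have cosine similarity exactly zero, so a 1-NN classifier has nothing to go on.

```python
def cosine(a, b):
    """Cosine similarity of sparse word-count dicts."""
    dot = sum(a[w] * b.get(w, 0) for w in a)
    na = sum(v * v for v in a.values()) ** 0.5
    nb = sum(v * v for v in b.values()) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

disjoint = cosine({'ball': 1, 'goal': 1}, {'tax': 1, 'rate': 1})  # no shared words
identical = cosine({'ball': 2}, {'ball': 3})  # same direction, counts differ
```

With short texts like tweets, most random pairs look like the first case, which is why the training set has to be large before neighbours become informative.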
37,740 | Sample size with respect to prediction in classification and regression | I don't understand the question fully. Generally, a bigger sample will yield (for example) a better classification, unless bigger means bad-quality observations. A small sample will make a lot of models useless. For example, since tree-based models are a sort of "divide and conquer" approach, their efficiency depends a lot on the size of the training sample.
On the other hand, if you are interested in statistical learning in high dimensions I think your concern has more to do with the curse of dimensionality. If your sample size is "small" and your feature space is of a "high" dimension your data will behave as if it were sparse and most algorithms will have a terrible time trying to make sense of it. Quoting John A. Richards in Remote Sensing Digital Image Analysis:
Feature Reduction and Separability
Classification cost increases with the number of features used to describe pixel vectors in multispectral space – i.e. with the number of spectral bands associated with a pixel. For classifiers such as the parallelepiped and minimum distance procedures this is a linear increase with features; however for maximum likelihood classification, the procedure most often preferred, the cost increase with features is quadratic. Therefore it is sensible economically to ensure that no more features than necessary are utilised when performing a classification. Section 8.2.6 draws attention to the number of training pixels needed to ensure that reliable estimates of class signatures can be obtained. In particular, the number of training pixels required increases with the number of bands or channels in the data. For high dimensionality data, such as that from imaging spectrometers, that requirement presents quite a challenge in practice, so keeping the number of features used in a classification to as few as possible is important if reliable results are to be expected from affordable numbers of training pixels. Features which do not aid discrimination, by contributing little to the separability of spectral classes, should be discarded. Removal of least effective features is referred to as feature selection, this being one form of feature reduction. The other is to transform the pixel vector into a new set of coordinates in which the features that can be removed are made more evident. Both procedures are considered in some detail in this chapter.
Which would mean that the problem is two-fold: finding relevant features, and the sample size you mention. As of now, you can download the book for free if you search for it on Google.
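The curse-of-dimensionality behaviour described above can be demonstrated with a tiny experiment (illustrative sketch, not from the book): holding the sample size fixed, the average nearest-neighbour distance among uniform random points grows rapidly with the dimension, i.e. the space empties out and the sample behaves as if it were sparse.

```python
import random

def avg_nn_distance(n, d, seed=0):
    """Average nearest-neighbour distance among n uniform points in [0,1]^d."""
    rng = random.Random(seed)
    pts = [[rng.random() for _ in range(d)] for _ in range(n)]
    total = 0.0
    for i, p in enumerate(pts):
        nearest = min(
            sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
            for j, q in enumerate(pts) if j != i
        )
        total += nearest
    return total / n

low_d = avg_nn_distance(60, 2)    # points have close neighbours
high_d = avg_nn_distance(60, 20)  # same n, but everything is far apart
```

To keep `high_d` comparable to `low_d` you would need exponentially more samples, which is the sample-size cost of each additional feature.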
Another way to read your question, which particularly interests me, would be this: in supervised learning you can only really validate your models on test data, by cross-validation and the like. If the labeled sample from which you obtained your train/test samples doesn't represent your universe well, the validation results might not apply to your universe. How can you measure the representativeness of your labeled sample?
37,741 | How can one perform a two-group binomial power analysis without using normal approximations? | This is not an answer. It is a community wiki that people may edit as they look for the answer.
G*power 3 can perform (approximations of) these analyses (per this site). The canonical reference for that software points to Cohen (1988), chapters 6 and 7, for performing (at least some of) these types of power analyses, as does this example using SAS. The exact equations/procedures may be available from that source. However, the approximations appear to break down at small probabilities.
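One way to avoid normal approximations entirely, offered here only as a hedged sketch (the particular design — Fisher's exact test at $\alpha = 0.05$ with equal group sizes — is my assumption, not from the cited sources): enumerate all outcome pairs, apply the exact test to each, and sum the binomial probabilities over the rejection region.

```python
from math import comb

def binom_pmf(k, n, p):
    return comb(n, k) * p ** k * (1.0 - p) ** (n - k)

def fisher_p(x1, n1, x2, n2):
    """Two-sided Fisher exact p-value for the 2x2 table (x1/n1 vs x2/n2):
    sum the hypergeometric probabilities of all tables with the observed
    margins that are no more probable than the observed one."""
    s = x1 + x2
    lo, hi = max(0, s - n2), min(n1, s)
    weights = {k: comb(n1, k) * comb(n2, s - k) for k in range(lo, hi + 1)}
    total = sum(weights.values())
    observed = weights[x1]
    return sum(w for w in weights.values() if w <= observed) / total

def exact_power(p1, p2, n1, n2, alpha=0.05):
    """Exact power of the two-group Fisher test: enumerate every (x1, x2)
    outcome, test it, and add up the binomial probability of the rejections."""
    power = 0.0
    for x1 in range(n1 + 1):
        for x2 in range(n2 + 1):
            if fisher_p(x1, n1, x2, n2) <= alpha:
                power += binom_pmf(x1, n1, p1) * binom_pmf(x2, n2, p2)
    return power

size = exact_power(0.5, 0.5, 10, 10)    # actual size; conservative, below 0.05
power = exact_power(0.1, 0.9, 10, 10)   # large effect -> high power
```

Because everything is enumerated, there is no breakdown at small probabilities; the cost is the $O(n_1 n_2)$ enumeration, which is trivial at the sample sizes where exactness matters most.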
37,742 | Causal identification and penalized splines | "Clean identification" of regression parameters is not an established concept. I believe what the reviewer means by this is that you should specify a parameter which is interpretable, testable, of low dimensionality, and for which the analysis is decently powered to detect so that an unbiased estimate can be obtained with relatively good efficiency.
The desire for "clean identification" does not imply OLS is the only suitable tool for the job. OLS is, however, a theoretically and practically sound tool for specifying and estimating parameters under a variety of settings. The desire for "clean identification" does not preclude semiparametric inference either. As a note, the spline extends an OLS model by creating (a) complex representation(s) of covariates. Semiparametric inference involves flexible modeling to eliminate the influence of ancillary statistics, but in your model it seems the main exposure is handled in such a fashion.
I think the reviewer raises two substantiated concerns. First is the rationale for penalization. Penalized regression methods are valuable for prediction. They are rarely used for inference. Penalized methods like ridge regression are biased, and it is difficult to describe or assess the bias. The goal of minimizing AIC is to obtain the best predictions, not valid inference. The second substantiated concern is whether the spline is even necessary to model the main exposure. It is true as you say that a spline is capable of modeling complex nonlinear functional forms. However, a spline simplifies very little. It is a complex high dimensional representation, with knot points and tuning that can be a source of researcher bias, and covariates that are nearly uninterpretable for anyone except highly trained statisticians. Many statistically significant trends that are precisely modeled by splines have underlying linear approximations which are neither statistically nor practically significant. Many statisticians and field experts agree that, in that case, both results should be carefully reported and/or that the spline is committing a type I error.
If the functional form of the main exposure is misspecified, it is possible to use Huber-White standard errors to obtain consistent and unbiased inference for the least squares slope as a first order approximation to any non-linear trend. Splines can be used to model precision variables, on which you do not base inference, when there is a complex design to the data. This serves to effectively match and reduce variability when there is complex heterogeneity in data.
I think the reviewer's comments can be addressed by fitting a linear model for the exposure and conducting inference with Huber-White sandwich errors. If the inference mostly agrees with the spline inference, comment on the spline model insofar as it demonstrates a curvilinear trend between the exposure and the response.
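As a sketch of that suggestion (simulated data; variable names are invented for illustration), the Huber-White sandwich covariance can be computed directly, treating the OLS slope as a first-order linear approximation to a possibly nonlinear trend:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
exposure = rng.uniform(0, 10, n)
# Simulated response: mildly nonlinear mean with heteroskedastic noise.
response = 0.5 * exposure + 0.05 * exposure**2 + rng.normal(0.0, 1.0 + 0.2 * exposure, n)

X = np.column_stack([np.ones(n), exposure])   # intercept + linear exposure term
XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ (X.T @ response)             # OLS slope: first-order approximation
resid = response - X @ beta

# Huber-White (HC0) sandwich: (X'X)^{-1} X' diag(e_i^2) X (X'X)^{-1}
meat = X.T @ (X * resid[:, None] ** 2)
cov_hw = XtX_inv @ meat @ XtX_inv
se_slope = np.sqrt(cov_hw[1, 1])
```

The same covariance is available in statsmodels via `OLS(y, X).fit(cov_type="HC0")`, or the small-sample `HC3` variant.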
37,743 | Credibility Intervals | The problem of comparing credible sets and confidence intervals is that they are not apples to apples or apples to oranges comparisons. They are an apples to tractors comparison. They are only substitutes for one another in certain circumstances.
The primary use of a confidence interval is in scientific research. Although businesses use them, their value is lessened since it is often difficult to choose an action based on a range. Applied business statistical methods tend to favor point estimates for practical reasons, even if intervals are included in reports. When included, they are mostly as warnings.
Credible sets tend to be less used in Bayesian methods because the entire posterior is reported as well as the marginals. They are reported out and descriptively provide a feel for the data if no graph of the posterior is provided, but they do not have the same usefulness as confidence intervals because they mean something different.
There are four cases where you will tend to see a credible set used instead of a confidence interval, but I am not certain that most of them are practical. It happens, but not often.
The first one has already been mentioned. There are times where a confidence interval appears to produce a pathological interval. I am less happy with this use. It is important to remember that confidence procedures produce valid intervals at least $1-\alpha$ percent of the time upon infinite repetition, but the price of that may be total nonsense sometimes. I am not sure that is a good reason to discard a Frequentist method.
Rare or widespread events are a typical example. If a high enough percentage of a population is doing or not doing something, then it may appear that everybody or nobody is doing something. Because Frequentist intervals are built around point estimates, and the sample has no variance, the interval lacks a range. I find it disturbing to abandon a method because it sometimes produces a result that others may not accept. The virtue of a Frequentist method is that all information comes from the data. It just happens that the data didn’t have enough information in it.
That is not the sum total of all pathologies, however. Other pathologies may encourage the use of a Bayesian method because an appropriate Frequentist method may exist but cannot be found. For example, the sample mean coordinate of the points in a donut centered on $(0,0,0)$ should be near $(0,0,0)$, but there is no donut there. That is where the donut hole is. A range built around an unsupported point may encourage a Bayesian alternative if information about the shape cannot be included in the non-Bayesian solution for some reason.
The second reason has a partial Frequentist analog, the case of outside information. In the general case, where there is outside research on a parameter of interest, both a Bayesian prior and a Frequentist meta-analysis produce useable intervals. The difficulty happens when the outside knowledge is not contained in data, per se, but in background knowledge.
Some knowledge is supported by theory and observations in unrelated studies but should logically hold. For example, consider the case of a well-engineered object that should range between 1 and 0. If it reaches 0, then it terminates. The next value is $x_{t+1}=\beta x_t+\epsilon$, with $0<\beta<1$. It can only have a value of 1 at $t=0$. It may be the case that $x_t$ can go up or down, but it can never reach 1 again and stops at 0. Furthermore, because it is well-engineered, $\beta=.9999999\pm{.00000001}$. Of course, we could have deceived ourselves about the true tolerance. That is the rub when using a Bayesian method.
In the case of the well-engineered product, confidence intervals are too conservative and overestimate the range of the interval. In that case, it can be trivially true that a 95% interval covers it at least 95% of the time because it may be so wide, given that prior information was excluded from its construction, that it should cover the parameter nearly 100% of the time.
The third case happens when something is a one-off event instead of a repeating event. Interestingly, you can create a case where a confidence interval is the valid interval for one party, and a credible set is the valid interval for another party with the same data.
Consider a manufacturing firm that produces some product that fails from time to time. It wants to guarantee that at least 99% of the time, it can recover from failure based on an interval. A confidence interval provides that guarantee. However, the party buying a product that failed may want an interval that has a 99% chance of being the correct interval to fix the problem as this will not repeat, and it must only work this one time. They are concerned about the data they have and the one event they are experiencing. They do not care about the product’s efficacy for the other customers of the firm.
The fourth case may have no real-world analogs, but it has to do with the difference in the type of loss being experienced. Most Frequentist procedures are minimax procedures. They minimize the maximum amount of risk that you are exposed to. That is also true for confidence procedures. Most Bayesian interval estimates minimize average loss. If your concern is minimizing your average loss from using an interval built by a non-representative sample, then you should use a credible set. If you are concerned about taking the smallest possible largest risk, then you should use a confidence interval.
But getting back to the apples and tractors, these do not happen that often. Frequentist procedures overtook the pre-existing Bayesian paradigm because it works in most settings for most problems. Bayesian procedures are clearly superior in some cases, but not necessarily Bayesian intervals.
The real-world cases for Bayesian credible sets are things like search and rescue because they can be quickly and easily updated and can use knowledge without prior research. It can also be superior when significant amounts of data are missing because Bayesian methods can treat a missing data point as it does a parameter. That can prevent a pathological interval created by information loss because it can then marginalize out the impact of the missing data.
This is a personal guess based on the observation that Bayesian methods are not in heavy use comparatively, but I am not that convinced an interval holds the same value on the Bayesian side of the coin.
Frequentist methods are built around points. Bayesian methods are built around distributions. Distributions carry more information than a single point. Bayesian methods can split inference and probability from actions taken based on those probabilities.
If an interval would be helpful, a loss function can be applied to the posterior, and boundaries for the interval can be discovered. In that case, it is a formalism to support a proper action given the data.
I do not suspect that specific use happens that much except in risk management, where ranges are essential. I do not know that it happens that much in that case.
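As an illustrative sketch of that loss-function route, a length-penalizing loss turns posterior draws into a highest-density interval (assuming a unimodal posterior):

```python
import numpy as np

def hpd_interval(draws, mass=0.95):
    """Shortest interval containing `mass` of the posterior draws --
    the interval a length-penalizing loss function selects (for a
    unimodal posterior)."""
    d = np.sort(np.asarray(draws))
    n = len(d)
    k = int(np.ceil(mass * n))            # draws the interval must contain
    widths = d[k - 1:] - d[: n - k + 1]   # width of every k-draw window
    i = int(np.argmin(widths))            # shortest window wins
    return d[i], d[i + k - 1]

# e.g. a Beta(3, 9) posterior for a proportion (2 successes in 10 trials
# under a uniform prior)
rng = np.random.default_rng(1)
lo, hi = hpd_interval(rng.beta(3, 9, size=100_000))
```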
Confidence intervals carry more information than point estimates. Credible sets are an information reduction technique.
A confidence interval of $7\pm{3}$ isn’t giving the same information as a credible set of $[6,7]\cup[7.5,9]$ for the same data.
37,744 | Credibility Intervals | A classic example is when you have tested a drug versus a placebo in a randomized clinical trial of 1 year duration and there were 1000 patients in each group. An adverse event that people were concerned could be a side effect of the treatment occurred in 0 patients in the treatment group and 0 patients in the placebo group. We have rates at which these events occurred in the placebo groups of previous similar studies in the same population, where they were also very rare, but sometimes occurred.
What can you say about the odds ratios (or rate ratio or hazard ratio)? A frequentist estimate would be that we don't really have an estimate and maybe our confidence interval is something like $(-\infty, \infty)$.
In contrast, a sensible Bayesian analysis will do something more informative as long as we have at least some weak prior information about the likely placebo rate and the possible size of a treatment effect. With a plausible level of prior information, a Bayesian analysis would in this kind of scenario already suggest that extreme odds ratios are no longer very likely.
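A minimal sketch of such an analysis for the 0/1000 versus 0/1000 scenario, assuming (purely for illustration) that historical placebo data justify a weakly informative Beta(1, 200) prior on each arm's event probability:

```python
import numpy as np

rng = np.random.default_rng(0)
draws = 200_000
# Hypothetical Beta(1, 200) prior updated with 0 events in 1000 patients:
# the posterior in each arm is Beta(1 + 0, 200 + 1000).
p_treatment = rng.beta(1, 1200, size=draws)
p_placebo = rng.beta(1, 1200, size=draws)
rate_ratio = p_treatment / p_placebo
ci_95 = np.percentile(rate_ratio, [2.5, 97.5])  # equal-tailed credible interval
```

Unlike the degenerate frequentist estimate, this posterior gives the rate ratio a proper finite interval, with extreme ratios assigned low probability.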
In contrast, see e.g. the TGN1412 example (see e.g. pages 2 and 92 to 94 here or Senn, S. (2008). Lessons from TGN1412 and TARGET: Implications for observational studies and meta-analysis. Pharmaceutical Statistics, 7(4):294–301.), where 6 out of 6 patients with an adverse event on a test drug compared with 0 out of 2 placebo patients with an event is not statistically significant at the one-sided 2.5% level (Fisher’s exact test). However, a sensible Bayesian analysis suggests that we are pretty sure that the side effects were due to the drug.
37,745 | Credibility Intervals | Bjorn's answer suggests a frequentist confidence procedure cannot handle sparse data, nor can it incorporate historical data. To illustrate this Bjorn provides the TGN1412 example,
(see e.g. pages 2 and 92 to 94 here or Senn, S. (2008). Lessons from TGN1412 and TARGET: Implications for observational studies and meta-analysis. Pharmaceutical Statistics, 7(4):294–301.), where 6 out of 6 patients with an adverse event on a test drug compared with 0 out of 2 placebo patients with an event.
Using only the data provided above (while assuming equal exposure for all subjects and that a subject can experience only 1 event of interest), the figure below depicts confidence curves (one-sided p-values) testing hypotheses regarding the population-level adverse event rate $p$ for the active and placebo treatments. It also identifies the one-sided 97.5% confidence limits. This is formed by inverting the CDF of a binomial distribution based on the $\hat{p}_{pbo}=0$ and $\hat{p}_{act}=1$ point estimates. The estimated rate ratio is $\hat{p}_{pbo}/\hat{p}_{act}=0$ and a conservative upper 97.5% confidence limit is the ratio of the individual confidence limits, $0.84/0.54=1.56$. Notice the point and interval estimate $0(0,1.56)$ for the rate ratio is not $0(-\infty,\infty)$.
This figure also shows Bayesian posterior densities (credible intervals of all levels) for the adverse event rate for each treatment based on an arbitrary uniform prior in each group. As estimators the posterior means are biased towards 0.5, which is evidenced by the observed point estimates. Also to note is the upper credible limit for the placebo event rate is noticeably shorter than the confidence limit. This credible limit may not have good coverage probability in repeated experiments, calling into question whether we should feel confident in its performance for this experimental result. Based on $100,000$ Monte Carlo simulations the two-sided equal-tailed $95\%$ credible interval for the incidence rate ratio is $(0.0096, 0.85)$. Viewing the prior as a user-defined weight function that smooths the likelihood, the posterior densities can be seen as approximate p-value functions. The choice of interpreting a credible interval comes down to what one wants to measure using probability, the experimenter or the experiment.
Based on these data and a uniform prior distribution, a strict posterior decision rule would lead one to conclude the unknown fixed true rate ratio is smaller than $1$. Both methods can incorporate relevant historical data. Encoding the historical and current data through the likelihood, it is not clear what arbitrary user-defined weight function (prior) one should choose when smoothing the likelihood to construct the posterior intervals.
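The boundary-case confidence limits quoted above have closed forms, since inverting the binomial CDF at $x=0$ or $x=n$ reduces to solving $(1-p)^n=\alpha$ or $p^n=\alpha$; a quick sketch reproducing those numbers:

```python
def upper_limit_zero_events(n, alpha=0.025):
    """One-sided upper confidence limit for p when x = 0 of n:
    solve (1 - p)^n = alpha."""
    return 1 - alpha ** (1 / n)

def lower_limit_all_events(n, alpha=0.025):
    """One-sided lower confidence limit for p when x = n of n:
    solve p^n = alpha."""
    return alpha ** (1 / n)

u_pbo = upper_limit_zero_events(2)      # 0 of 2 placebo events  -> about 0.84
l_act = lower_limit_all_events(6)       # 6 of 6 active events   -> about 0.54
conservative_upper_rr = u_pbo / l_act   # ratio of limits        -> about 1.56
```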
Addendum: Per Bjorn's request we can also look at the scenario where both groups have zero observed events. Just as before the credible intervals are worrisomely shorter than the confidence intervals, and the posterior means are the result of biased estimators.
The challenge now is to construct a point and interval estimate for the incidence rate ratio. The maximum likelihood estimate is $\frac{\hat{p}_{pbo}}{\hat{p}_{act}}=\frac{0}{0}$, which we could define to be equal to $1$. However, to construct conservative upper and lower confidence limits as before would produce values of the form $\frac{c}{0}$.
The Bayesian analysis of the rate ratio avoids this trouble because of the uniform prior distributions for each rate. This is equivalent to incorporating hypothetical experimental evidence by considering the scenario where each treatment group had recruited $2$ additional subjects, and $1$ subject in each group experienced the event of interest. This of course does not match the actual observed experiment, but it does provide conservative point estimates (conservative in the sense that the adverse event rate is not underestimated).
This same examination of hypothetical experimental evidence can be performed by referencing the exact binomial sampling distribution, which is presented in the figure below. Under this hypothetical scenario, a conservative $95\%$ confidence interval can be constructed by using the ratios of confidence limits for the individual rates, producing $\bigg(\frac{\hat{p}^L_{pbo}}{\hat{p}^U_{act}},\frac{\hat{p}^U_{pbo}}{\hat{p}^L_{act}}\bigg)=\Big(\frac{0.006}{0.53},\frac{0.81}{0.003}\Big)=(0.011,270)$. Another approach would be to invert the cumulative distribution function for the maximum likelihood estimator of the rate ratio while profiling the nuisance parameter $p_{act}$. Based on $100,000$ Monte Carlo simulations, the two-sided equal-tailed $95\%$ credible interval for the rate ratio is $(0.068, 68.25)$.
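The Monte Carlo step can be sketched as follows, assuming (as the numbers above imply) uniform priors, 0 of 2 placebo events, and 0 of 6 active events, giving Beta(1, 3) and Beta(1, 7) posteriors:

```python
import numpy as np

rng = np.random.default_rng(0)
draws = 500_000
# Uniform Beta(1, 1) priors updated with 0 events give Beta(1, n + 1) posteriors.
p_pbo = rng.beta(1, 2 + 1, size=draws)   # 0 of 2 placebo events
p_act = rng.beta(1, 6 + 1, size=draws)   # 0 of 6 active events
ratio = p_pbo / p_act
ci_95 = np.percentile(ratio, [2.5, 97.5])  # equal-tailed 95% credible interval
```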
If we instead investigate the difference in incidence rates then no hypothetical experimental evidence is needed when constructing confidence limits based on the binomial CDF. If a subject can experience more than 1 event or we have varying exposure for each subject (or both) then a Poisson or Negative Binomial model should be used instead.
Treating fixed population-level parameters as random variables gives the appearance that more uncertainty is being accounted for, but often leads to credible limits (approximate confidence limits) that are too short.
Addendum: Per Bjorn's request we can also look at the scenario where both groups have zero observed events. Just as before the credible intervals are worrisomely shorter than the confidence intervals, and the posterior means are the result of biased estimators.
The challenge now is to construct a point and interval estimate for the incidence rate ratio. The maximum likelihood estimate is $\frac{\hat{p}_{pbo}}{\hat{p}_{act}}=\frac{0}{0}$, which we could define to be equal to $1$. However, to construct conservative upper and lower confidence limits as before would produce values of the form $\frac{c}{0}$.
The Bayesian analysis of the rate ratio avoids this trouble because of the uniform prior distributions for each rate. This is equivalent to incorporating hypothetical experimental evidence by considering the scenario where each treatment group had recruited $2$ additional subjects, and $1$ subject in each group experienced the event of interest. This of course does not match the actual observed experiment, but it does provide conservative point estimates (conservative in the sense that the adverse event rate is not under estimated).
This same examination of hypothetical experimental evidence can be performed by referencing the exact binomial sampling distribution, which is presented in the figure below. Under this hypothetical scenario, a conservative $95\%$ confidence interval can be constructed by using the ratios of confidence limits for the individual rates, producing $\bigg(\frac{\hat{p}^L_{pbo}}{\hat{p}^U_{act}},\frac{\hat{p}^U_{pbo}}{\hat{p}^L_{act}}\bigg)=\Big(\frac{0.006}{0.53},\frac{0.81}{0.003}\Big)=(0.011,270)$. Another approach would be to invert the cumulative distribution function for the maximum likelihood estimator of the rate ratio while profiling the nuisance parameter $p_{act}$. Based on $100,000$ Monte Carlo simulations, the two-sided equal-tailed $95\%$ credible interval for the rate ratio is $(0.068, 68.25)$.
If we instead investigate the difference in incidence rates then no hypothetical experimental evidence is needed when constructing confidence limits based on the binomial CDF. If a subject can experience more than 1 event or we have varying exposure for each subject (or both) then a Poisson or Negative Binomial model should be used instead.
Treating fixed population-level parameters as random variables gives the appearance that more uncertainty is being accounted for, but often leads to credible limits (approximate confidence limits) that are too short. | Credibility Intervals
Bjorn's answer suggests a frequentist confidence procedure cannot handle sparse data, nor can it incorporate historical data. To illustrate this Bjorn provides the TGN1412 example,
(see e.g. pages 2 |
Credibility Intervals

If you are concerned about the recovery rate of a certain disease, a credible interval is what you need when you want to say

There is a 95% chance that the recovery rate is between X and Y.

You cannot say this using a confidence interval. With a 95% confidence interval, you can only say

There is a 95% chance that the next set of patient samples has a recovery rate between X and Y (crossed out so as not to mess up the sample-generation scenario -- that ASSUMES my sample distribution is the true population distribution, so there is a 95% chance the next sample drawn from the population falls within the interval X to Y)

If we draw $N$ sets of samples and calculate a confidence interval for each set, 95% of those intervals cover the true recovery rate, but I don't know whether a particular interval X to Y contains the true recovery rate or not. In other words, I am only 95% confident that the true recovery rate falls within the confidence interval X to Y that I calculated from my data.
Relative advantages of multiple imputation and expectation maximization (EM)

Whether or not it makes sense to use GLMs depends on the distribution of $y$. I'd be inclined to use a nonlinear least squares model for the whole thing.

So if your regression model is $a = Z\alpha+\nu$ where $Z$ are the predictors and $\alpha$ are the parameters in the regression model for $a$, and your model for $b$ is $b = f(x)+\epsilon$ but where $f(x)$ is restricted to be non-negative, you could write $f(x) = \exp(\psi(x))$ and fit a model like this:
$$
y = Z\alpha+\exp(\psi(x))+\eta
$$
where $\eta$ is the sum of the two individual noise terms. (If you really intend that $y=a+b$ with no error at all, you have to do it differently; that's not really a stats problem so much as an approximation problem, and you would probably want to look at infinity norms then.)

If you put, say, a cubic regression spline in for $\psi$, that would be one easy way of getting some general smooth function in. That model could be fitted by nonlinear least squares. (Indeed, some algorithms can take advantage of the linearity in $a$ to simplify and speed up the calculation.)

Depending on what you assume about $y$ or $f$, there are other things you might do instead.

That doesn't really address the imputation issue yet. However, this sort of model framework can be inserted into something like your suggestion of using EM.
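To make the suggestion concrete, here is a hedged sketch (Python with scipy and simulated data — the answer gives no code, and the data, seed, and the choice of a cubic polynomial standing in for the regression spline basis are all assumptions) of fitting $y = Z\alpha+\exp(\psi(x))+\eta$ by nonlinear least squares:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)

# Made-up data: one linear predictor z and one covariate x entering through exp(psi(x)).
n = 200
z = rng.normal(size=n)
x = rng.uniform(-1, 1, size=n)
y = 1.5 * z + np.exp(0.3 + 0.8 * x) + rng.normal(scale=0.2, size=n)

# psi(x) is a cubic polynomial here for brevity; a regression spline basis
# would slot in the same way (one coefficient per basis column).
def residuals(theta):
    alpha, c0, c1, c2, c3 = theta
    psi = c0 + c1 * x + c2 * x**2 + c3 * x**3
    return y - (alpha * z + np.exp(psi))   # y = Z*alpha + exp(psi(x)) + eta

fit = least_squares(residuals, x0=np.zeros(5))
alpha_hat = fit.x[0]
print(alpha_hat)   # should recover something near the true 1.5
```

Because $\exp(\psi)$ is always positive, the non-negativity restriction on $f$ is enforced automatically, which is the point of the reparameterization.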
The gradient of a bivariate probit model

Let's take a step back and solve a simpler problem - how do derivatives with respect to a variable in the limits of an integral work, at least when everything is sufficiently nice? Let's take a very basic approach:
$\frac{d}{dz} \int_{a}^{h(z)} g(x) \,dx$
Let $G$ be an antiderivative of $g$, i.e. $\frac{dG}{dz}=g(z)$. Then the above
$=\frac{d}{dz} [G(h(z))-G(a)]=\frac{d}{dz} G(h(z))=h'(z)\,g(h(z))$
The same approach should be sufficient for your problem.
(whuber's approach gets you there faster though)
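The rule is easy to verify numerically. Below is a small Python check comparing a central-difference derivative of $F(z)=\int_a^{h(z)} g(x)\,dx$ against $h'(z)\,g(h(z))$; the particular choices $g(x)=e^{-x^2}$ and $h(z)=z^2$ are illustrative assumptions, not from the question:

```python
import numpy as np
from scipy.integrate import quad

g = lambda x: np.exp(-x**2)     # integrand
h = lambda z: z**2              # upper limit as a function of z
h_prime = lambda z: 2 * z       # its derivative

a, z0, eps = 0.0, 1.3, 1e-4
F = lambda z: quad(g, a, h(z))[0]                  # F(z) = integral from a to h(z)
numeric = (F(z0 + eps) - F(z0 - eps)) / (2 * eps)  # central difference
analytic = h_prime(z0) * g(h(z0))                  # h'(z) g(h(z))
print(numeric, analytic)   # the two values agree closely
```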
I am getting a number below zero when calculating out two standard deviations from the mean. Is this ok?

It appears unlikely to me that the question would require you to calculate two standard deviations of the data out from the mean - especially given that your data are unlikely to be even symmetric, much less normally distributed (since they are discrete). I see no interesting question that could really be answered by this calculation.

It appears more likely that you are asked to give a confidence interval for the mean. This also involves calculating the standard deviation of the data, but then you calculate the standard error of the mean from this standard deviation by dividing by the square root of the sample size, and finally construct the confidence interval based on the standard error. This confidence interval is therefore much less likely to go beneath zero (and if it did, you should indeed truncate at zero). Note that the sampling distribution of the mean will be roughly normally distributed as the sample size increases, which is why this interval actually answers an interesting question, namely where we expect the actual mean to lie.
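A quick sketch of the distinction, using made-up count data (the asker's data are not in the question): the mean minus two sample standard deviations can easily go negative, while a confidence interval for the mean, built on the much smaller standard error $sd/\sqrt{n}$, stays positive.

```python
import math
from statistics import mean, stdev

# Hypothetical discrete count data (assumption, for illustration only).
data = [0, 1, 1, 2, 3, 0, 4, 2, 1, 5, 2, 3]

n = len(data)
m = mean(data)                 # 2.0 for this sample
sd = stdev(data)
se = sd / math.sqrt(n)         # standard error = sd / sqrt(n)

print(m - 2 * sd)              # "two SDs below the mean" is negative here
lo, hi = m - 1.96 * se, m + 1.96 * se   # approximate 95% CI for the mean
print(lo, hi)                  # stays well above zero
```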
Should the likelihood function be increasing in every step of the EM algorithm?

The estimator may increase or decrease during each iteration; however, the likelihood must increase (more precisely, it can never decrease).

You should make sure your likelihood is increasing at each step and check whether you are converging to the same value.
Should the likelihood function be increasing in every step of the EM algorithm?

Perhaps not relevant in this case, but note that if the E-step is estimated, with Monte Carlo methods or another approximation, it is possible for the likelihood to decrease.

A thought that might be relevant is that EM does not converge towards the global maximum but to a local one.

For more details see section 3 of "On the Convergence Properties of the EM Algorithm" by Wu.
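The monotonicity guarantee is easy to see numerically. Below is a minimal Python sketch (with made-up data — not from the question) of exact-E-step EM for a two-component Gaussian mixture; the log-likelihood recorded after every iteration never decreases, even though individual parameter estimates move up and down:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy data: a two-component 1-D Gaussian mixture.
x = np.concatenate([rng.normal(-2, 1, 150), rng.normal(3, 1, 150)])

def log_lik(x, w, mu, sigma):
    # log-likelihood of a Gaussian mixture with weights w, means mu, sds sigma
    dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return np.log(dens.sum(axis=1)).sum()

# Deliberately poor starting values.
w, mu, sigma = np.array([0.5, 0.5]), np.array([-0.5, 0.5]), np.array([1.0, 1.0])

lls = []
for _ in range(50):
    # E-step: responsibility of each component for each point
    dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: closed-form updates of weights, means, and sds
    nk = r.sum(axis=0)
    w = nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    lls.append(log_lik(x, w, mu, sigma))

# Monotone up to floating-point noise:
print(all(b >= a - 1e-7 for a, b in zip(lls, lls[1:])))  # → True
```

If the E-step were approximated by simulation, as the answer notes, individual dips in this sequence would be possible.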
Probability that 2 OH NFL teams go 31 weeks w/o wins on the same day

There is a big selection bias. It would make more sense to calculate the probability of any two teams going 31 weeks without both teams winning during the same week than just these two teams.

Your way of calculating seems better than your friend's. Assuming that the probability of winning a game is 11/42 makes more sense than assuming that the team will win exactly 11 out of 42 games (if the team loses their first game they aren't more likely to win their second game).
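For concreteness, here is the kind of calculation the answer is endorsing, sketched in Python. The assumptions (each team wins any given week independently with probability 11/42, and both teams play all 31 weeks) are simplifications, and the selection-bias point above means the result should not be read as evidence about these two particular teams:

```python
# Probability that two such teams go 31 straight weeks without
# both winning in the same week, under the independence assumptions above.
p = 11 / 42
p_both_win = p * p                  # both teams win in a given week
p_streak = (1 - p_both_win) ** 31   # no such week in 31 consecutive weeks
print(p_streak)                     # roughly 0.11
```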
Comparison of frequency tables over time

I'm not sure that I clearly understand the design of the experiment, but if more details are given, I will maybe re-answer your question. Mainly - how did CTRL and TRT differ from each other? Was the number of students equal in each trial?

As I understand it, this should help.

First, you need to reorganize your data into an array:

Students <- array(c(18,14,7,7,2,6,5,30,28,10,4,4,17,10,5,11),
    dim = c(4,2,2), dimnames = list(GROUP = c("1","2","3","4"),
    Response = c("success","failure"), Trial.no = c("1","8"))); Students

You can also add more trials (not only the 1st and 8th as in this case).

mantelhaen.test(Students)

If the p-value is < 0.05, it is evidence for a learning effect.
How to predict the probability of a performance rank for a test taker on a future test based on previous test scores?

There is one big caveat and several smaller ones. First, an approximate answer for Tom ranking first is $p=$ 0.6315 with 95% confidence intervals of 0.6306 to 0.6324. Note, the confidence intervals are from how precisely I determined an answer, not from what the variability of the probability actually is. Now the gory details.

One cannot use Wilcoxon or other ranking methods; there are just too many ties. Thus, the Big Caveat: I assume that the ranker knows how to apportion tied scores exactly to award only one first-place rank. I don't need to know how to do that, as follows.

I transformed the data into an approximately normal distribution, and to do this on a larger data set, it would have to be redone properly on that data. To do that, I took all the data and tested it for normality. The mean and median were different and the tails were asymmetric, and the data looked vaguely like r-values. So, I took the $ArcTanh($test score$/10)$ (Fisher's transformation), which made the data a lot more normal. The reader should not take the transformation that I used to heart. One caveat is that it would not allow for a perfect score of 10, of which, for my quick and dirty approximation, there were none. A better distribution can only be found from analyzing more data. I then found the mean and standard deviation of that transformed data and did 1000 Monte Carlo simulations using the inverse normal 1000 times. For each simulation, I counted when Tom had the best score. Now, because I used real numbers and not integer scores, what would have been ties were adjudicated perfectly, without my knowing how to actually do that adjudication in practice. Alice's probability of ranking first can be determined using the same methods.

Now there are probably 1000 ways of doing this same problem, and my humble suggestion has ifs, ands, and buts that may be avoided by handling the problem otherwise. My solution is only approximate, but it is also quick and almost brain-dead. I leave it up to others to suggest better methods.
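One reading of this Monte Carlo procedure can be sketched as follows (Python, with made-up score histories — the original data are not reproduced in the answer, and fitting a normal per student rather than to the pooled transformed data is an assumption). Because the simulated scores are real numbers, ties occur with probability zero, which mirrors the answer's point about adjudication:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical per-student score histories on a 0-10 scale.
scores = {
    "Tom":   [9.1, 8.7, 9.3, 8.9, 9.0],
    "Alice": [8.8, 9.0, 8.6, 8.7, 8.9],
    "Bob":   [7.9, 8.2, 8.0, 8.4, 8.1],
}

# Fisher-transform each history, fit a normal per student,
# simulate a future test, and count how often Tom has the top score.
z = {name: np.arctanh(np.array(s) / 10) for name, s in scores.items()}
mu = {name: v.mean() for name, v in z.items()}
sd = {name: v.std(ddof=1) for name, v in z.items()}

n_sim = 10_000
draws = np.column_stack([rng.normal(mu[name], sd[name], n_sim) for name in scores])
p_tom_first = (draws.argmax(axis=1) == 0).mean()   # column 0 is Tom
print(p_tom_first)
```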
How to predict the probability of a performance rank for a test taker on a future test based on previous test scores?

A plan should be:

consider a test as a 'match' between students
sort tests in time order, and insert the results into a rating system engine.

You can retrieve the probability that Tom will be ranked first in the next test from the rating values.

If you're more interested in use than in development, rankade, our free ranking system for sports, games, and more, allows matches with both 2 and 3+ factions (as per your needs, while Elo and Glicko work just for one-on-one - here's a comparison).

In addition, rankade has a weight feature (do all tests have the same impact?) that might refine your work.
How to examine interactions between factor and covariate in a mixed effects model?

For visualizing interaction terms, you may look at the sjPlot package (see examples here).

Your function call would be

sjp.int(fit, type = "eff")

I'm not sure, however, if this meets your needs?
How to examine interactions between factor and covariate in a mixed effects model?

I personally think that if you want to examine the true relationship between Y and the factors in your model after controlling for X, you should be looking at the plotted adjusted rather than raw means computed from your favorite model.

For this purpose there are R packages such as lsmeans which are quite handy and user-friendly!
37,758 | How to examine interactions between factor and covariate in a mixed effects model? | You can investigate the VIFs of your model. VIF stands for Variance Inflation Factor and is a way to measure co-linearity.
https://onlinecourses.science.psu.edu/stat501/node/347
There is a vif function in the car package for R.
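The VIF for predictor $j$ is $1/(1-R^2_j)$, where $R^2_j$ comes from regressing predictor $j$ on the remaining predictors. A minimal sketch of that calculation in Python (the answer itself points to R's car::vif), with synthetic data for illustration:

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of X (n x p design matrix).

    VIF_j = 1 / (1 - R^2_j), where R^2_j is from regressing column j
    on the other columns (plus an intercept).
    """
    n, p = X.shape
    out = []
    for j in range(p):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(n), others])   # intercept + other predictors
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        r2 = 1.0 - (resid @ resid) / ((y - y.mean()) ** 2).sum()
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = x1 + 0.1 * rng.normal(size=200)   # nearly collinear with x1
x3 = rng.normal(size=200)              # independent of the others
X = np.column_stack([x1, x2, x3])
print(vif(X))  # x1 and x2 get large VIFs, x3 stays near 1
```

Large VIFs for x1 and x2 flag the co-linearity the answer warns about; note that car::vif additionally handles generalized VIFs for factor terms, which this sketch does not.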
37,759 | randomForest vs. cforest; Can I get partial dependence plots and percent variance explained in package party? | My package edarf will calculate partial dependence for predictors using cforest. You can get permutation importance using the varimp function in the party package as well.
Yes, cforest generates an ensemble of trees of the same form as ctree, with random features selected at each node and subsampling (by default). Control the parameters via cforest_control. If you download the source from the CRAN page you can see all the relevant code, most of which is written in C but is fairly readable.
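edarf's internals are not shown here, but the generic partial-dependence recipe (Friedman-style: fix the feature of interest at each grid value for every row, then average the model's predictions) is model-agnostic and can be sketched in a few lines of Python; the toy model below stands in for any fitted learner:

```python
import numpy as np

def partial_dependence(predict, X, feature, grid):
    """For each grid value v, set column `feature` of every row to v
    and average the model's predictions over the dataset."""
    pd_vals = []
    for v in grid:
        Xv = X.copy()
        Xv[:, feature] = v
        pd_vals.append(predict(Xv).mean())
    return np.array(pd_vals)

# Toy "fitted model": prediction depends on x0 quadratically and on x1 linearly.
def toy_predict(X):
    return X[:, 0] ** 2 + 0.5 * X[:, 1]

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 2))
grid = np.linspace(-2, 2, 5)
pd_vals = partial_dependence(toy_predict, X, feature=0, grid=grid)
print(pd_vals)  # equals v**2 + 0.5 * X[:, 1].mean() at each grid value v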
37,760 | randomForest vs. cforest; Can I get partial dependence plots and percent variance explained in package party? | You can now make partial dependence plots for any learner in R by using the mlr package. Here is the tutorial that explains how to do that: tutorial
37,761 | Measuring effects of categorical factors on binomial outcome with many groups | To circumvent the 200-players problem, you could fit whichever model you choose (logit, binomial...), without the player variable as such, but inside a discrete mixture framework. You'll have to process the data right (for instance you want to make sure that all stats of a single player are taken together, and you'll have to determine the optimal number of clusters in the mixture) but the fitted mixture model will group players into clusters, which should reflect differences in performance, or rather differences in how the conditions (home and ahead) affect performance. This is very easy and fast with R package flexmix.
Building on the same idea, you could also just run an unsupervised clustering algo (k-means, gaussian mixture, self-organizing map) on the data transformed as such: each player has one vector of 8 values $(rate_{home,lead}, N_{home, lead}, rate_{home, behind}, N_{home, behind}, ...)$. In that case each player belongs to a cluster of players with similar characteristics, and you can check whether the differences between clusters are significant.
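The clustering idea above can be sketched with a tiny hand-rolled k-means on synthetic per-player vectors of the kind the answer describes (all data below is made up; in practice you would use flexmix, kmeans, or similar):

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Tiny k-means with deterministic farthest-point initialization."""
    centers = [X[0]]
    for _ in range(k - 1):
        # next center: the point farthest from all current centers
        d = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[d.argmax()])
    centers = np.array(centers, dtype=float)
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Each "player" is an 8-value vector of rates/counts under the four conditions.
rng = np.random.default_rng(2)
good = rng.normal(loc=0.8, scale=0.05, size=(30, 8))   # high-rate players
poor = rng.normal(loc=0.4, scale=0.05, size=(30, 8))   # low-rate players
X = np.vstack([good, poor])
labels, centers = kmeans(X, k=2)
# The two simulated groups should land in two different clusters.
```

With real data the number of clusters would have to be chosen (e.g. by an information criterion, as flexmix does for mixture models), and rates and raw counts should be put on comparable scales first.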
37,762 | Measuring effects of categorical factors on binomial outcome with many groups | I think you could fit a logistic regression model using player, ahead/behind, home/away, percentage success and number of shots taken under those conditions as possible covariates. The difficulty with player is that you have over 200. I think that success percentage under specific conditions could serve as a substitute for player, since the player and his past performance under the conditions should be highly related to the outcome. To predict for individual players you just use that player's other covariates.
37,763 | Hidden Markov models and anomaly detection | According to this Wikipedia article, there are many inference benefits you gain from a HMM. The first inference is the ability to assign a probability to any observation sequence $\mathbf{Y} = (Y_1,\ldots, Y_N)$ by marginalizing over the set of all possible hidden state sequences $\mathbf{X} = (X_1,\ldots, X_N)$:
$P(\mathbf{Y}) = \sum_{\mathbf{X}} P(\mathbf{X}) P( \mathbf{Y} \vert \mathbf{X} )$
This way, you can assign probabilities to observation sequences even in an on-line manner as observations arrive (using the very efficient forward algorithm). An anomaly is an observation that is (relatively) highly unlikely according to $P(\mathbf{Y})$ (a threshold can be used to decide). Of course, the value of $P(\mathbf{Y})$ grows smaller and smaller as $N$ increases. Many methods can be used to renormalize $P(\mathbf{Y})$ to keep it within the representable range of floating-point data types and enable meaningful thresholding. For example, we might use the following as an anomaly measure:
$\mathbb{A}_{N} = \log P(Y_N \vert Y_1,\ldots, Y_{N-1}) = \log \frac{P(Y_1,\ldots, Y_N)}{P(Y_1,\ldots, Y_{N-1})}$
$\mathbb{A}_{N} = \log P(Y_1,\ldots, Y_N) - \log P(Y_1,\ldots, Y_{N-1})$
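The prefix log-likelihoods $\log P(Y_1,\ldots,Y_t)$ that this measure differences can be computed online with the forward algorithm. A sketch in Python, using a small discrete-observation HMM whose parameters (two sticky states with near-deterministic emissions) are made up purely for illustration:

```python
import numpy as np

def forward_loglik(pi, A, B, obs):
    """Log-likelihoods log P(Y_1..Y_t) for every prefix, via the forward algorithm.

    pi: (S,) initial state probs; A: (S, S) transitions (rows sum to 1);
    B: (S, V) emission probs; obs: sequence of symbol indices.
    """
    def lse(v):  # log-sum-exp, for numerical stability
        m = v.max()
        return m + np.log(np.exp(v - m).sum())

    log_alpha = np.log(pi) + np.log(B[:, obs[0]])
    prefix_ll = [lse(log_alpha)]
    for y in obs[1:]:
        # log_alpha[j] = log sum_i alpha[i] * A[i, j] * B[j, y]
        log_alpha = np.array(
            [lse(log_alpha + np.log(A[:, j])) for j in range(len(pi))]
        ) + np.log(B[:, y])
        prefix_ll.append(lse(log_alpha))
    return np.array(prefix_ll)

# Two sticky states; state 0 mostly emits symbol 0, state 1 mostly emits symbol 1.
pi = np.array([0.5, 0.5])
A = np.array([[0.95, 0.05],
              [0.05, 0.95]])
B = np.array([[0.9, 0.1],
              [0.1, 0.9]])
obs = [0, 0, 0, 0, 1, 0, 0]     # the lone 1 is the "surprising" observation
ll = forward_loglik(pi, A, B, obs)
anomaly = np.diff(ll)           # A_N = log P(Y_1..N) - log P(Y_1..N-1)
# anomaly dips most sharply at the surprising observation
```

Because it only differences successive prefix log-likelihoods, $\mathbb{A}_N$ stays in a well-behaved range even as $P(\mathbf{Y})$ itself underflows.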
37,764 | Variable Selection One by One vs Simultaneously | Before going to the specifics of the method, first we need to understand the two classes of feature selection:
1. Univariate: where we consider the input features one by one.
2. Multivariate: where we consider a group of variables together.
In many cases univariate feature selection can produce good enough results, but as with all things in machine learning, "there is no free lunch" kicks in and you have to decide everything based on the data.
For example, consider the chessboard problem (often also known as the XOR problem) in feature selection, as shown in the figure below. Neither X nor Y alone would be able to distinguish the two classes, red and black, while both of them taken together can distinguish the two.
This video from over 10 years ago should give a good introduction:
http://www.quizover.com/oer/course/introduction-to-feature-select-by-isabelle-gu-videolectures-net
Including this article http://www.jmlr.org/papers/volume3/guyon03a/guyon03a.pdf by the presenter herself.
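The XOR/chessboard point can be checked numerically: each feature alone carries essentially no information about the label, while the two together determine it exactly. A small sketch with synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4000
x1 = rng.integers(0, 2, n)
x2 = rng.integers(0, 2, n)
y = x1 ^ x2                                # XOR labels

# Univariate view: each feature alone is (nearly) uncorrelated with y,
# so any one-by-one filter would discard both features.
corr_x1 = np.corrcoef(x1, y)[0, 1]
corr_x2 = np.corrcoef(x2, y)[0, 1]

# Multivariate view: the pair of features classifies perfectly.
joint_accuracy = ((x1 ^ x2) == y).mean()

print(corr_x1, corr_x2, joint_accuracy)
```

This is the failure mode of purely univariate selection: any multivariate method (wrappers, embedded methods, interaction features) is needed to recover the pair.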
37,765 | Mean comparisons following multiple imputation | A recent paper by van Ginkel & Kroonenberg works out the details of pooling F-tests and other ANOVA results. The paper is:
van Ginkel, J. R., & Kroonenberg, P. M. (2014). Analysis of Variance of Multiply Imputed Data. Multivariate Behavioral Research, 49(1), 78-91.
and van Ginkel's website (http://www.socialsciences.leiden.edu/educationandchildstudies/childandfamilystudies/organisation/staffcfs/van-ginkel.html) has SPSS macros with instruction files. As far as I know, their formulae have not yet been implemented in R.
@Brian, if you do write a function, please share!
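The van Ginkel & Kroonenberg formulae for pooling F-tests are not reproduced here, but for context the standard Rubin's rules for pooling a single scalar estimate across imputed datasets look like this (numbers purely illustrative):

```python
import numpy as np

def rubin_pool(estimates, variances):
    """Pool one scalar parameter across m imputed datasets (Rubin's rules).

    estimates: per-imputation point estimates q_i
    variances: per-imputation squared standard errors U_i
    Returns (pooled estimate, total variance T = W + (1 + 1/m) * B).
    """
    q = np.asarray(estimates, dtype=float)
    u = np.asarray(variances, dtype=float)
    m = len(q)
    qbar = q.mean()        # pooled point estimate
    w = u.mean()           # within-imputation variance
    b = q.var(ddof=1)      # between-imputation variance
    t = w + (1 + 1 / m) * b
    return qbar, t

est, tot_var = rubin_pool([1.2, 1.0, 1.1, 1.3, 0.9],
                          [0.04, 0.05, 0.04, 0.06, 0.05])
print(est, tot_var)
```

Pooling ANOVA F-statistics (as opposed to scalar coefficients) needs the dedicated formulae from the paper above; this sketch only shows the scalar case those formulae generalize.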
37,766 | Modeling a spline over time -- design matrix and survey of approaches | I do agree with you that you may need to account for individual respondents' error terms through time, particularly if you do not have results for all periods for each respondent.
A way to do this is with the BayesX. It allows for spatial effects with splines where you can have time in one dimension and the covariate value in the other. Further, you can add a random effect for each observation. Potentially, have a look at this paper.
Though, I am quite sure you will have to put your model into long format. Further, you will have to add an id column for the respondent or the random effect.
37,767 | Balanced repeated measures design | You don't need to specify a covariance structure and it is highly discouraged: If you choose the wrong structure, you might miss the targeted type-I-error. Instead, use the procedure described here. It is a generalization of ANOVA for unknown covariance matrices and even applicable if there are more repeated measures than independent subjects.
Unfortunately, it is not (yet) implemented in SPSS. But there are SAS macros. See how hld-f2.sas is used.
37,768 | Balanced repeated measures design | If you want to model the covariance structure simply as compound symmetric, the results from mixed-effects modeling and repeated-measures ANOVA should match if the data are balanced and not missing. If you want to model the covariance structure as something else (e.g., unstructured or autoregressive), then you need to use mixed-effects modeling.
How do you know which covariance structure to use? First, plot the data and see if the variance/correlation noticeably changes over nights. Second, compute a covariance matrix to see whether the (co)variance remains constant over nights. Third, check the journal you are thinking of submitting your article to. Does it prefer advanced or conventional techniques? If the journal publishes only articles using ANOVAs, then you may want to stick with ANOVA as well (unless you feel comfortable justifying the use of an alternative covariance structure based on mixed-effects modeling).
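The "compute a covariance matrix" check can be done directly on wide-format data (subjects in rows, nights in columns). A small sketch with simulated data whose per-night variance grows, so compound symmetry would be suspect:

```python
import numpy as np

rng = np.random.default_rng(3)
n_subjects, n_nights = 200, 4
# Simulated wide data: per-night SDs grow across nights.
sds = np.array([1.0, 1.5, 2.0, 2.5])
data = rng.normal(size=(n_subjects, n_nights)) * sds

cov = np.cov(data, rowvar=False)      # nights x nights covariance matrix
per_night_var = np.diag(cov)
print(per_night_var)                  # increasing -> variance not constant over nights
```

An increasing diagonal (or off-diagonal correlations that decay with lag) is exactly the kind of evidence that points away from compound symmetry and toward, e.g., an unstructured or autoregressive covariance.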
37,769 | Compute p-value in paired bootstrap | As far as I understand from looking at section 2, the authors seem to explain their rationale for the bootstrap test as follows:
"the $x_i$ were sampled from $x$, and so their average $\delta(x_i)$ won’t be zero like the null hypothesis demands; the average will instead be around $\delta(x)$... The solution is a re-centering of the mean – we want to know how often $A$ does more than $\delta(x)$ better than expected. We expect it to beat $B$ by $\delta(x)$. Therefore, we count up how many of the $x_i$ have $A$ beating $B$ by at least $\delta(x)$."
The authors want to test if the gain is non-zero so they write the p-value as
$\delta(x_i) < 2\delta(x)$, which could be re-written as $0 < 2\delta(x) - \delta(x_i)$; because $E[\delta(x_i)]=\delta(x)$, the R.H.S. of the inequality then becomes $\delta(x)$, which is the $H_0$ they were seeking to reject.
37,770 | How should I interpret the interaction term in a Cox proportional hazards model? | For models beyond the simplest (and an interaction makes it non-simple) I like to look at predictions instead of trying to interpret the coefficients directly. Does the software that you used to fit the model also do predictions for a given set of x and y? (many if not all do). You can then make predictions for patients with the following (x,y): (0,0), (0,1), (10,0), and (10,1) and see how they compare (or maybe use values more meaningful, such as start at the mean or median and then go 1, 10 units either direction). A simple prediction is mean or median survival, but if possible it is really nice for a survival analysis to plot the 4 (or more) predicted survival curves (different colors). These plots/comparisons often make the direction and magnitude of the effects clear.
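Even without survival-curve software at hand, the relative hazards at those four $(x, y)$ points follow directly from the Cox form $h(t \mid x, y) = h_0(t)\exp(\beta_1 x + \beta_2 y + \beta_3 xy)$. A sketch with hypothetical coefficient values (not taken from the original post):

```python
import math

# Hypothetical coefficients, for illustration only
b_x, b_y, b_xy = -0.011, -0.178, 0.0016

def rel_hazard(x, y):
    """Hazard relative to the baseline (x=0, y=0) patient; the unknown
    baseline hazard h0(t) cancels out of the ratio."""
    return math.exp(b_x * x + b_y * y + b_xy * x * y)

for x, y in [(0, 0), (0, 1), (10, 0), (10, 1)]:
    print(f"x={x:2d}, y={y}: hazard ratio vs baseline = {rel_hazard(x, y):.3f}")
```

With a positive $\beta_3$, the (10, 1) hazard ratio exceeds the product of the two main-effect ratios, which is the "more than expected" pattern an interaction encodes.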
37,771 | How should I interpret the interaction term in a Cox proportional hazards model? | Did you find out the answers? I would like to know that, too. I think the interpretation is like this: With a one-point increase in $Y$ and a 10-point increase in $X$, the risk of death increases by 1.6% and this is significant. Keeping $X$ constant, an increase in $Y$ decreases the risk (by 16.3%), and keeping $Y$ constant, an increase in $X$ decreases the risk (by 10.5%), but when both of them are working together, they increase the risk of death. We can also check this if we have coefficient values for the baseline hazard ($\beta_0$), $X$ ($\beta_1$), $Y$ ($\beta_2$) and $X\times Y$ ($\beta_3$). If there is no interaction then $\exp(\beta_3) = \exp(\beta_1+\beta_2-\beta_0)$. I am not a statistician. Please correct me if I am wrong.
37,772 | Identifying differences among calibration curves: ANCOVA? | If you don't care about the intercept, I think you want this:
> model <- lm(response ~ -1 + conc + day:treat)
> anova(model)
Analysis of Variance Table
Response: response
          Df  Sum Sq Mean Sq F value    Pr(>F)
conc       1 18.5545 18.5545 62.3458 9.909e-05 ***
day:treat  4  1.7860  0.4465  1.5003    0.2996
Residuals  7  2.0832  0.2976
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
> summary(model)
Call:
lm(formula = response ~ -1 + conc + day:treat)
Residuals:
    Min      1Q  Median      3Q     Max
-0.5736 -0.4803  0.0222  0.3465  0.6013
Coefficients:
               Estimate Std. Error t value Pr(>|t|)
conc             0.6859     0.1929   3.556  0.00927 **
dayday1:treatA   0.6919     0.3693   1.873  0.10316
dayday2:treatA   0.1093     0.3693   0.296  0.77585
dayday1:treatB   0.3387     0.3693   0.917  0.38965
dayday2:treatB   0.7090     0.3693   1.920  0.09636 .
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.5455 on 7 degrees of freedom
Multiple R-squared: 0.9071, Adjusted R-squared: 0.8407
F-statistic: 13.67 on 5 and 7 DF, p-value: 0.001695
The -1 in the lm removes the intercept.
There aren't really "slopes" in your model. You just have different days and different treatments. In the results above, you see that the interaction is not significant.
Presumably this isn't your real data. In your real data, you might also want to look at something like this:
model2 <- lm(response ~ -1 + conc + day + treat)
and
model3 <- lm(response ~ -1 + conc + day*treat)
You can visualize this with the following code (pretty hacky)
plot(as.numeric(day),response,col=as.numeric(treat), xlim=c(1,2.2))
points(as.numeric(day)+0.1,pred,col=as.numeric(treat),pch=2)
for(i in 1:6){lines(c(1,1.1), c(response[i],pred[i]),col=as.numeric(treat)[i])}
for(i in 7:12){lines(c(2,2.1), c(response[i],pred[i]),col=as.numeric(treat)[i])}
legend("bottom", c("treatA","treatB","observed","predicted"), col=c(1,2,1,1),
lty=c(1,1,NA,NA), pch=c(NA,NA,1,2))
I'm still new to Stack Exchange, so if someone could add a comment with a link that shows how to include this plot rather than just the code, I'd appreciate it.
EDIT
Based on this code, the following image was produced. You didn't give the pred object so I'm assuming a few things here. Predicted values are smaller and transparent.
pred <- data.frame(treat = df$treat, conc = df$conc, day = df$day, response = predict(mdl, newdata = df[, 1:3]))
library(ggplot2)
ggplot(df, aes(y = response, x = treat, colour = as.factor(conc))) +
geom_jitter(position = position_jitter(width = 0.25), shape = 16, size = 3) +
geom_point(data = pred, aes(shape = as.factor(conc)), alpha = 0.2) +
facet_grid(~day) +
theme_bw() | Identifying differences among calibration curves: ANCOVA? | If you don't care about the intercept, I think you want this:
> model <- lm(response ~ -1 + conc + day:treat)
> anova(model)
Analysis of Variance Table
Response: response
Df Sum Sq Mean Sq | Identifying differences among calibration curves: ANCOVA?
If you don't care about the intercept, I think you want this:
> model <- lm(response ~ -1 + conc + day:treat)
> anova(model)
Analysis of Variance Table
Response: response
Df Sum Sq Mean Sq F value Pr(>F)
conc 1 18.5545 18.5545 62.3458 9.909e-05 ***
day:treat 4 1.7860 0.4465 1.5003 0.2996
Residuals 7 2.0832 0.2976
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
> summary(model)
Call:
lm(formula = response ~ -1 + conc + day:treat)
Residuals:
Min 1Q Median 3Q Max
-0.5736 -0.4803 0.0222 0.3465 0.6013
Coefficients:
Estimate Std. Error t value Pr(>|t|)
conc 0.6859 0.1929 3.556 0.00927 **
dayday1:treatA 0.6919 0.3693 1.873 0.10316
dayday2:treatA 0.1093 0.3693 0.296 0.77585
dayday1:treatB 0.3387 0.3693 0.917 0.38965
dayday2:treatB 0.7090 0.3693 1.920 0.09636 .
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.5455 on 7 degrees of freedom
Multiple R-squared: 0.9071, Adjusted R-squared: 0.8407
F-statistic: 13.67 on 5 and 7 DF, p-value: 0.001695
The -1 in the lm removes the intercept.
There aren't really "slopes" in your model. You just have different days and different treatments. In the results above, you see that the interaction is not significant.
Presumably this isn't your real data. In your real data, you might also want to look at something like this:
model2 <- lm(response ~ -1 + conc + day + treat)
and
model3 <- lm(response ~ -1 + conc + day*treat)
You can visualize this with the following code (pretty hacky)
plot(as.numeric(day),response,col=as.numeric(treat), xlim=c(1,2.2))
points(as.numeric(day)+0.1,pred,col=as.numeric(treat),pch=2)
for(i in 1:6){lines(c(1,1.1), c(response[i],pred[i]),col=as.numeric(treat)[i])}
for(i in 7:12){lines(c(2,2.1), c(response[i],pred[i]),col=as.numeric(treat)[i])}
legend("bottom", c("treatA","treatB","observed","predicted"), col=c(1,2,1,1),
lty=c(1,1,NA,NA), pch=c(NA,NA,1,2))
I'm still new to Stack Exchange, so if someone adds a comment with a link that shows how to include this plot rather than just the code, I'd appreciate it.
EDIT
Based on this code, the following image was produced. You didn't give the pred object, so I'm assuming a few things here. Predicted values are smaller and transparent.
pred <- data.frame(treat = df$treat, conc = df$conc, day = df$day, response = predict(mdl, newdata = df[, 1:3]))
library(ggplot2)
ggplot(df, aes(y = response, x = treat, colour = as.factor(conc))) +
geom_jitter(position = position_jitter(width = 0.25), shape = 16, size = 3) +
geom_point(data = pred, aes(shape = as.factor(conc)), alpha = 0.2) +
facet_grid(~day) +
theme_bw()
37,773 | Using MAD as a way of defining a threshold for significance testing | I doubt it. Most probably, the distribution of frequency terms is highly skewed. In such a case, using a threshold rule based on the assumption that the underlying data is drawn from a symmetrical distribution will give highly misleading thresholds (and, as a result, potentially misleading results).
You could try to apply the thresholding rule you propose on transformed versions of your data, using transformations such as the arcsine. The threshold rule you proposed is based on order statistics, meaning that the result should not depend on which transformation you use so long as it is a valid transformation (i.e. a monotone function on the domain of your inputs).
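The rule under discussion — flag terms beyond the median plus a multiple of the MAD — is quick to sketch. A minimal Python illustration (the frequencies here are made up; 1.4826 is the usual consistency constant that makes the MAD estimate the SD under normality, which is exactly the symmetry assumption skewed data violates):

```python
import statistics

def mad_threshold(values, k=3.0):
    """Flag values more than k robust SDs above the median."""
    med = statistics.median(values)
    mad = statistics.median([abs(v - med) for v in values])
    cutoff = med + k * 1.4826 * mad
    return [v for v in values if v > cutoff]

freqs = [1, 2, 2, 3, 3, 3, 4, 4, 5, 120]  # toy skewed term frequencies
print(mad_threshold(freqs))  # [120]
```

On strongly skewed data the mirror-image rule for the lower tail would flag nothing at all, which is one symptom of the symmetry problem described above.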
An alternative solution that I personally favor, because it simplifies interpretation, is to use adjusted boxplots.
37,774 | Post processing random forests using regularised regression: what about bias? | I want to add some thoughts on the problem at hand, so that the discussion may roll on. However, I propose something else to think about, so others may comment on this.
When reading this post and the post in the highlighted link, we try to overcome the bias in the RF (possibly in the tails) and to correct the biased output of the RF by applying another method, e.g. an elastic net. However, doing an elastic net first and then the RF, or vice versa, has not led to a well-received solution. We try to stack methods one after another to overcome their downsides, but with no success.
The problem may be buried in the procedural usage of two methods, where each method has its say on all features in the equation at once. I'm not speaking about parallelism here. I mean: what if we gave each feature its say through a function of its own?
When we do an RF to predict the outcome of some function $f(x)$, we try to model all features $x_1, x_2, \dots$, the whole equation, all at once with one method. In other words, the summed trees, or the opinion of all trees, come to an averaged conclusion on how to deal with all features in the equation. The elastic net does the same when choosing $\lambda$ for regularization. But we are doing it on all features at once, and that means that one or maybe two features have a downside from this. Someone has to draw the short straw.
Maybe the Find, as the previous poster stated, is not the real Find, as it wants to find another method to correct the misbehavior. Maybe the real Find lies not in stacking methods one after another. What if we tried to give every feature its say by applying one function per feature, like $y = a_0 + a_1 f_1(x_1) + \dots + a_n f_n(x_n)$? I had not heard of it until I read it myself. I'm talking about the InterpretML package by Microsoft.
Although at the moment
It only supports trees as base learners
(Dr. Robert Kübler)
it may shed some light on the problem.
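The per-feature form $y = a_0 + a_1 f_1(x_1) + \dots + a_n f_n(x_n)$ can be made concrete with a toy sketch (this shows only the additive structure, not InterpretML itself; the shape functions here are invented):

```python
# Each feature gets its own shape function; the prediction is their sum.
shape_funcs = [lambda x: 2.0 * x,   # f_1: invented linear shape
               lambda x: x ** 2]    # f_2: invented quadratic shape

def predict(features, a0=1.0):
    return a0 + sum(f(x) for f, x in zip(shape_funcs, features))

print(predict([3.0, 2.0]))  # 1.0 + 6.0 + 4.0 = 11.0
```

Because each $f_i$ sees only its own feature, each feature's contribution can be read off (and plotted) in isolation, which is the interpretability argument being made here.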
I just wanted to offer the thought that we may have to look for a different way instead of 'correcting' our route.
Paper Microsoft:
https://arxiv.org/pdf/1909.09223.pdf
Medium:
https://towardsdatascience.com/the-explainable-boosting-machine-f24152509ebb
37,775 | Post processing random forests using regularised regression: what about bias? | It is a decent question, asked a decade ago. The chat is long gone.
Question
Given a random forest:
That has bias in the outputs
That is decently trained with decent parameters and is otherwise non-pathological
has some similarity to this (link)
Find:
Elastic net regression, or other regularized regression, to calibrate the output
Logistic regression based calibration of the ensemble output
A combination of both, if possible, to improve the result.
Analysis:
In progress...
37,776 | Time series with multiple subjects and multiple variables | As I mentioned in my note above, I would treat this as a regression problem. Here is a link to constructing, in R, the lag (and lead) variables from your data (R Head).
Included in the post is a brief introduction to using the resulting data in a regression model. You might also want to do a bit of background digging on the R package dynlm (dynamic linear regression).
37,777 | Time series with multiple subjects and multiple variables | You could create tables where the y1 is shifted by 0,1,2,3,4 weeks.
Then you run an analysis on them. For instance, you could make a neural network that tries to predict y1 from x. For some ideas, you can give Weka a spin.
Then, you have some measure of predicting y1 from x for each lag. Using this, you can find the lag that fits best.
Alternatively, you can create one table that includes x from the current week, x from the previous week, ... and y1.
Then do an analysis of influence (e.g. PCA) to see which week and which variable has the most influence.
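Shifting y1 by 0–4 weeks, as suggested above, amounts to building a lagged design table. A small Python sketch with toy weekly values (pandas' shift or the R equivalents would do the same job):

```python
def add_lags(series, max_lag):
    """Return rows [y_t, y_{t-1}, ..., y_{t-max_lag}];
    the first max_lag rows are dropped because their lags are unknown."""
    return [[series[t - lag] for lag in range(max_lag + 1)]
            for t in range(max_lag, len(series))]

weeks = [10, 12, 9, 14, 13, 11]  # toy weekly y1 values
print(add_lags(weeks, 2))
# [[9, 12, 10], [14, 9, 12], [13, 14, 9], [11, 13, 14]]
```

The same construction with the x columns alongside y1 gives the table on which the lag-selection analysis described above can be run.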
37,778 | How to adjust average rating for sample size on rating systems with more than two categories? | One way to cast your problem would be to treat it as a bayesian estimation problem.
Basically this means having a prior on your mean and updating the mean based on each new observation over time.
A practical, yet theoretically disputable way to achieve this is to compute the mean as a function of the mean found in the corpus and the actual observations you have for this item. More precisely, in the recommender system setting, this could mean that you initialize the mean to the mean of the category of the item you're dealing with (in your example "statistics books" probably) and then update it each time a user gives a rating to this particular item.
You can design a clever update rule that has statistical foundations or rely on common sense to quickly produce a basic update rule like this one:
X : item
r_X^i : i-th rating for item X
C : all items in the same category as X, discarding items with no ratings
N_C : total number of ratings over all items in C
mean_C = (1/N_C) * sum_{c in C} sum_{i} (r_c^i)
# when no ratings => use the category mean
mean_X^0 = mean_C
# when n ratings => blend the category mean with the actual ratings
mean_X^n = (1/(n+1)) * (mean_C + sum_{i=1..n}(r_X^i))
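The update rule above — the category mean acting as a single pseudo-rating — is a one-liner in any language. A Python sketch with invented numbers:

```python
def shrunk_mean(ratings, category_mean):
    """Item mean pulled toward the category mean; the prior counts as one
    pseudo-rating, so its influence fades as real ratings accumulate."""
    return (category_mean + sum(ratings)) / (len(ratings) + 1)

print(shrunk_mean([], 3.8))         # no ratings yet: falls back to 3.8
print(shrunk_mean([5], 3.8))        # one 5-star rating: (3.8 + 5) / 2 = 4.4
print(shrunk_mean([5] * 200, 3.8))  # many ratings: ~4.99, prior barely matters
```

A fuller Bayesian treatment would also carry uncertainty, but this already fixes the one-reviewer-five-stars problem: a single 5 only moves the estimate to 4.4.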
When dealing with this kind of problem in general, I recommend reading the work of Koren et al. on the Netflix challenge. They grabbed quite a bit of performance by using unsupervised learning on user and content variables - the idea of using the category mean being a similar, yet naive, cousin.
37,779 | How to adjust average rating for sample size on rating systems with more than two categories? | In the example you give, only one person has reviewed and given a score of 5/5. At this point, I would say you don't have enough information to give an informative estimate of the mean (or median). Possible scores are 1,2,3,4, or 5, so all you could say is that the mean average is somewhere between 1 and 5 and that one person on planet earth really likes the book.
However, if you have more people review, you can construct a confidence interval for that true mean review score. That way you could give a confidence level and some upper and lower bounds for the rating. (e.g. 95% confident that the book's rating is between 4.2 and 4.8). These bounds become tighter the more reviewers you have, so they do take into account the number of scores received.
However, typical Gaussian based confidence interval theory only holds up when you have a random sample from some population. Here the population is not well defined, perhaps those people who have bought the book through that website. Also, I would not say online reviewers are a random sample at all. I've found that book reviews (as with many online reviews) attract those people at the extremes who either love or hate the product. But perhaps it's best not to dwell too much on these issues...
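The point about intervals tightening with more reviewers is easy to demonstrate. A plain Gaussian-approximation sketch in Python (illustrative only; the Wilson interval mentioned below behaves better for small samples):

```python
from statistics import NormalDist, mean, stdev

def rating_ci(ratings, conf=0.95):
    """Normal-approximation confidence interval for the mean rating."""
    z = NormalDist().inv_cdf((1 + conf) / 2)
    half = z * stdev(ratings) / len(ratings) ** 0.5
    m = mean(ratings)
    return (m - half, m + half)

few = [5, 4, 5, 4, 5]   # 5 reviews, mean 4.6
many = few * 40         # same mean, 200 reviews
print(rating_ci(few))   # wide: roughly (4.1, 5.1)
print(rating_ci(many))  # tight: roughly (4.5, 4.7)
```

With a single review the sample standard deviation is undefined, which mirrors the point above: one 5/5 rating carries almost no interval information.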
I think what you're hinting at is the idea that if one person gave a book 5/5, this should probably not be considered better than an average of, say, 4.5/5 that's been reviewed by 200 people. And you mentioned "average", so perhaps you just want a one number summary that can be sorted easily.
I'm not too familiar with the Wilson score interval, but it looks like it is similar to the Gaussian confidence interval, except that its construction is based on the score statistic.
You might want to look into some kind of weighted average that penalizes you for having a small sample.
37,780 | What is the confidence interval calculated in a spectral density periodogram in R? | You are on the right track by inspecting the source code. In addition to the spec.ci function you found, the plotting chunk for CI is given by
ci.text <- ""
conf.y <- max(x$spec)/conf.lim[2L]
conf.x <- max(x$freq) - x$bandwidth
lines(rep(conf.x, 2), conf.y * conf.lim, col = ci.col)
lines(conf.x + c(-0.5, 0.5) * x$bandwidth, rep(conf.y, 2), col = ci.col)
Therefore the confidence interval is about a single parameter $f(\omega)$ with $\omega$ taken to be at the fixed frequency conf.x.
The confidence interval formula for a given frequency $\omega$, is (see Eq. (10.5.2) of Brockwell & Davis Time Series: Theory and Methods):
\begin{align*}
\left(\frac{\nu \hat{f}(\omega)}{\chi_{1 - \alpha/2}^2(\nu)},
\frac{\nu \hat{f}(\omega)}{\chi_{\alpha/2}^2(\nu)}\right), \quad 0 < \omega < \pi,
\end{align*}
where $\nu$ is the equivalent degrees of freedom of the estimator $\hat{f}$, which corresponds to df in the code you pasted in the question.
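For the raw periodogram the equivalent degrees of freedom are $\nu = 2$, and the $\chi^2(2)$ quantile has the closed form $q(p) = -2\log(1-p)$, so the interval can be checked by hand. A Python sketch (not the R internals):

```python
import math

def spec_ci(f_hat, conf=0.95, nu=2):
    """CI for f(w): (nu*f_hat / chi2_{1-a/2}(nu), nu*f_hat / chi2_{a/2}(nu)).
    The closed-form chi-square quantile used here is only valid for nu = 2."""
    assert nu == 2
    alpha = 1 - conf
    q_hi = -2 * math.log(alpha / 2)      # chi-square(2) quantile at 1 - alpha/2
    q_lo = -2 * math.log(1 - alpha / 2)  # chi-square(2) quantile at alpha/2
    return (nu * f_hat / q_hi, nu * f_hat / q_lo)

lo, hi = spec_ci(1.0)
print(lo, hi)  # about (0.27, 39.5)
```

The enormous width of this interval is why smoothed estimators with larger df (the df reported by spec.pgram) are needed to get usable confidence limits.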
37,781 | Technical variation versus real signal | The issue is you have various possible sources of randomness. Individual randomness (the normal error term in a linear regression); variation between your two measurements in each case; and variation from the particular units you've sampled. I think you probably want something like
model <- aov(outcome ~ condition + Error(samp + measurement), data=mydata)
summary(model)
Hope that helps.
37,782 | If a factor variable is to be dropped in model selection, should all levels be dropped simultaneously? If so, why? [duplicate] | I'm really not sure what the answer would be in the absence of crossvalidation. But if we are crossvalidating, and we find that, say, one ethnic group out of 6 is substantially different from the others wrt Y, I can't seem to see anything wrong with using only that group's dummy variable in the followup equation. If membership/nonmembership in that group, and none other, is helping to predict the outcome (or to explain it, for that matter), why gummy up the equation with a bunch of unhelpful predictor dummies, which would only figure to add noise to the prediction?
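The "keep only the one informative dummy" idea can be written out explicitly. A toy Python sketch with an invented grouping (group "c" playing the role of the one distinctive group):

```python
groups = ["a", "b", "c", "a", "c"]

# Full dummy coding (reference level "a"): one column per non-reference level.
full = [[int(g == "b"), int(g == "c")] for g in groups]

# Keep only the dummy for the one group found to differ on Y.
single = [[int(g == "c")] for g in groups]

print(full)    # [[0, 0], [1, 0], [0, 1], [0, 0], [0, 1]]
print(single)  # [[0], [0], [1], [0], [1]]
```

Whether the reduced coding helps out of sample is exactly what the cross-validation caveat above is about: the choice of which dummy to keep should be made inside each training fold.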
37,783 | How can I compare my model to a technically invalid model? | One solution would be to use cross-validation methods. This might be a conceptually easy (and elegant) solution because the model you have differs significantly from the model to be compared to. AIC or BIC won't really work here because the functional forms of these two models are very different -- yours is nonlinear and their model is not only linear but also based on binned data. AIC or BIC is insensitive to functional forms.
I wouldn't worry about binning vs non-binning too much, since it seems to me that binning is a modeling decision that could make a model better or worse. In other words, it's a feature whose effectiveness should be tested.
Now, assuming you can implement the other model, you can perform a k-fold cross-validation:
Divide your data into k subsets;
Iteratively leave one subset out, and train your model (without binning) and the other model (with binning) on the rest of the subsets;
Compute the sum of loglikelihoods of the subset that was left out in the previous step with regard to your model and the other model. This should be relatively straightforward: in your nonlinear model, error is binomially distributed; in the other model, error is normally distributed since it's a simple linear regression;
Repeat 2 and 3 until you have used each of the k subsets as the test subset (thus the name k-fold).
You can then compare which model gives you the better loglikelihood (i.e. the less negative one).
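The steps above can be sketched end to end. A self-contained Python illustration, with a toy Gaussian model pair standing in for the two real models (held-out log-likelihood, larger is better):

```python
import math
import random

def kfold_indices(n, k, seed=0):
    """Shuffle 0..n-1 and deal the indices into k near-equal folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cv_loglik(data, fit, loglik, k=5):
    """Sum of held-out log-likelihoods over k folds."""
    total = 0.0
    for fold in kfold_indices(len(data), k):
        held = set(fold)
        train = [x for i, x in enumerate(data) if i not in held]
        model = fit(train)
        total += sum(loglik(model, data[i]) for i in fold)
    return total

def normal_loglik(mu, x, sd=1.0):
    return -0.5 * math.log(2 * math.pi * sd * sd) - (x - mu) ** 2 / (2 * sd * sd)

rng = random.Random(1)
data = [2.0 + rng.gauss(0.0, 1.0) for _ in range(100)]  # toy data centred at 2

fitted = cv_loglik(data, lambda tr: sum(tr) / len(tr), normal_loglik)
fixed = cv_loglik(data, lambda tr: 0.0, normal_loglik)
print(fitted > fixed)  # True: the model centred on the data wins out of sample
```

With the real models, fit would call the actual estimation routine and loglik the binomial or Gaussian density noted in step 3.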
37,784 | Covariate modeling with a within-subject factor | I would suggest using a random-effects model in this case. In my opinion, to allow for within- and across-subject variability it is better to treat this as hierarchical modelling. The first level allows for within-subject variation and the second level allows for across-subject variation.
To clarify: in the first level you try to find the average for the within-subject variation, and the second level will detect the relationship across subjects, taking the within variation into account. Such that:
level 1: (An average for A effect on x with within-x variation)
xi|A ~ (x, var(within))
level 2: (The average effect of x on Y by taking within-subject variation into account)
Y|x ~(Mean (x), var(across)+var(within)) | Covariate modeling with a within-subject factor | I would suggest using random model in this case. In my opinion to allow for within and across subjects variability it is better to consider it as a hierarchical modelling. The first level allows for w | Covariate modeling with a within-subject factor
37,785 | How to calculate sample size for comparing the area under the curve of two models? | I could not find an R-package that would solve the problem. But I can remember reading the book "Statistical Methods in Diagnostic Medicine" by Zhou, Obuchowski and McClish
(Amazon Link). They give a method (too long to reproduce it here for now) for determining the sample size and refer to 2 publications:
Obuchowski NA. Nonparametric analysis of clustered ROC curve data. Biometrics.
1997 Jun;53(2):567-78. PubMed PMID: 9192452.
which IMHO should be instead:
Obuchowski NA, McClish DK. Sample size determination for diagnostic accuracy studies involving binormal ROC curve indices. Stat Med. 1997 Jul
15;16(13):1529-42. PubMed PMID: 9249923.
The second one is a technical report at the University of Chicago by Metz, Kronman and Wang (1989): FORTRAN Program ROCPWR. I could not find this one but googling led me to
http://www-radiology.uchicago.edu/krl/KRL_ROC/ROC_analysis_by_topic4.htm
I could not find a working link to download the software, though. Maybe someone else does, or you could contact the authors.
I hope this helps at least a little bit...
psj
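For a rough feel of the kind of calculation the Obuchowski/McClish references describe, here is a sketch (Python, standard library only) using the Hanley & McNeil (1982) variance approximation for a single AUC and a simple normal-approximation search for the per-group sample size. Treat it as an illustration under simplifying assumptions (equal cases and controls, independent groups), not a replacement for the published method.

```python
import math
from statistics import NormalDist

def auc_variance(auc, n_pos, n_neg):
    """Hanley & McNeil (1982) variance approximation of a single estimated AUC."""
    q1 = auc / (2.0 - auc)
    q2 = 2.0 * auc ** 2 / (1.0 + auc)
    return (auc * (1.0 - auc) + (n_pos - 1) * (q1 - auc ** 2)
            + (n_neg - 1) * (q2 - auc ** 2)) / (n_pos * n_neg)

def n_per_group(auc1, auc2, alpha=0.05, power=0.80):
    """Smallest n (cases = controls = n, two independent samples) whose
    normal-approximation test detects |auc1 - auc2| with the given power."""
    z = NormalDist().inv_cdf
    needed = z(1.0 - alpha / 2.0) + z(power)
    delta = abs(auc1 - auc2)
    n = 2
    while delta / math.sqrt(auc_variance(auc1, n, n) + auc_variance(auc2, n, n)) < needed:
        n += 1
    return n

print(n_per_group(0.80, 0.70))  # required n in each group for AUC 0.80 vs 0.70
```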
37,786 | How to calculate sample size for comparing the area under the curve of two models? | There is a just-released R package called pROC which does what you want using the function power.roc.test.
Documentation of the package:
https://cran.r-project.org/web/packages/pROC/pROC.pdf
I hope it helps.
37,787 | How to predict future reservations when data for the current day is incomplete? | Ratio estimates just don't work. For example, if it takes 1 hour to complete three innings at a baseball game, you can rest assured that the next 6 innings are going to take a lot longer than 2 hours to complete. In order to predict tomorrow given partial information for today and full information for the past NOB days, I suggest the following approach, which we implemented for Procter & Gamble when they had been unable to detect the economic downturn in a timely fashion. Their problem was: if we have, say, 15 days of history in the current month and 16 days remain, with say 1 holiday and 2 Saturdays (for example), we want to compute the probability of achieving total sales of X. We implemented a daily forecasting model that included day-of-the-week, week-of-the-year, and month-of-the-year effects AND the lead/contemporaneous and lag effects around known events AND any Level Shifts/Time Trends that proved to be statistically significant. The model/approach also included an ARIMA component and validation tests/remedies for constancy of the parameters and variance over time. Furthermore, Pulses and Seasonal Pulses (i.e. significant changes in the day-of-the-week component) were also entertained in order to develop a robust data generating function (DGF).
What you want to do is to also include an hour-of-the-day forecasting model which would, in conjunction with the daily forecast/model, produce an estimate of the current day's total and the total for the next 15 days. It is imperative that the hourly forecasts and the daily forecasts not only reconcile but be fully integrated, where expectations regarding daily totals actually drive the hourly estimates.
If you wished to post your hourly data going back at least 2-3 years, I would be glad to share the results with the list. If for some reason you don't wish to share your data with the list then perhaps we can do a chat room session.
We have also had some experience with a major hotel chain to improve their 60 day forecast for occupancy.
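As a point of comparison for the fully integrated hourly/daily model described above, here is the simplest baseline one might start from (Python, standard library only; the booking history is simulated): estimate the typical cumulative fraction of a day's reservations received by the end of each hour, then scale up today's partial count. A real implementation would replace this flat profile with the day-of-week and event effects discussed in the answer.

```python
import random

random.seed(2)
HOURS = 24
# Simulated history: 30 complete past days, bookings concentrated in daytime hours.
weights = [1, 1, 1, 1, 1, 2, 4, 8, 10, 12, 12, 10,
           9, 9, 10, 11, 10, 8, 6, 4, 3, 2, 1, 1]
history = [[random.randint(0, 2 * w) for w in weights] for _ in range(30)]

# Average cumulative fraction of a day's total reached by the end of each hour.
cum_frac = [sum(sum(day[:h + 1]) / sum(day) for day in history) / len(history)
            for h in range(HOURS)]

def project_total(counts_so_far):
    """Project the end-of-day total from a partial day of hourly counts."""
    return sum(counts_so_far) / cum_frac[len(counts_so_far) - 1]

today = [1, 0, 1, 1, 2, 3, 5, 9, 11, 13]  # counts through hour 9 of the current day
print(round(project_total(today)))
```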
37,788 | How to interpret coefficients produced by the sem function in R? | sem gives direct effects only. To get total as well as indirect effects, use the functions given by John Fox.
37,789 | Multicategory choice model with given categories | Did you read this?
http://www.jstor.org/pss/30038862
Edwards and Allenby seem to have the same basic setup as you, a multivariate probit, for which you can find code in the bayesm package.
It seems you should be able to evaluate the dependency with a test of whether the probits are independent in the different scenarios, via a likelihood ratio test on rho, just like the endogeneity tests people advocate. So run the seemingly unrelated multivariate probit, and do a likelihood ratio test on rho to see if the things impact each other.
Here is an example of the test on rho in the SUR mv probit, about 2/3 of the way down:
http://www.philender.com/courses/categorical/notes1/biprobit.html
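The likelihood ratio test on rho suggested above reduces to a chi-square test with one degree of freedom. A minimal sketch (Python; the two log-likelihood values are hypothetical numbers standing in for the fitted unrestricted and rho = 0 models):

```python
import math

def lr_test_rho(ll_unrestricted, ll_rho_zero):
    """Likelihood ratio test of H0: rho = 0 (one restriction => chi-square, 1 df)."""
    lr = 2.0 * (ll_unrestricted - ll_rho_zero)
    p_value = math.erfc(math.sqrt(lr / 2.0))  # survival function of chi-square(1)
    return lr, p_value

# Hypothetical fitted log-likelihoods from the two biprobit runs:
lr, p = lr_test_rho(-1203.4, -1207.1)
print(lr, p)
```

A small p-value would reject independence, i.e. the choices do appear to impact each other.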
37,790 | How to analyse repeated measure ANOVA with three or more conditions presented in randomised order? | Repeated measures is kind of an overloaded term. To some people it refers to a particular statistical analysis method; to others it refers to the structure of the design.
This is a variant on a three period, three treatment crossover design.
It is a variant because usually in a crossover design you randomize subjects to sequences. In this case the sequence is determined randomly for each subject. Since there are six possible sequences, it might be that some sequences are not observed, especially with 10 subjects. Maybe this is formally the same as randomizing subjects to sequences, but I haven't looked at that yet.
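The sequence point above is easy to illustrate by simulation (Python, standard library only): with only 10 subjects each drawing an independent random order of the three conditions, some of the 6 possible sequences may never be observed.

```python
import itertools
import random

random.seed(3)
sequences = list(itertools.permutations("ABC"))  # the 6 possible treatment orders
# Each of the 10 subjects gets an independently randomised order:
assignments = [random.choice(sequences) for _ in range(10)]
observed = set(assignments)
print(len(observed), "of", len(sequences), "sequences observed")
```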
The considerations for crossover designs are:
Carryover effects: Also known as residual effects, where prior treatment may affect response to current treatment. The goal of the washout periods is to remove this from consideration. You could also have (in theory) second-order residual effects, where the treatment given in the first period potentially affects the response to treatment given in the third period.
Period effects: Response to treatment(s) may change as the study goes on for a given subject.
Autocorrelation: Serial correlation in errors is usually an issue with more closely spaced measurements. In simple balanced designs, having a random effect for subject is going to imply equal correlation of errors from each subject.
Subject effects: Subjects may differ in mean response from each other regardless of treatments. You could conceive of a situation where measurement error was serially correlated separate from a random subject effect.
Sequence effect: In cases where you randomize subjects to sequences, subjects are considered nested in sequence.
A minimal analysis for this would be the suggested randomized complete block design. That is, a fixed effect for treatment and a random effect for subject. With a skimpy sample size that might be all you can really do.
I would argue for a bit more structure to the analysis, if possible. Assuming no carryover effects on scientific grounds, it seems like a good idea to have fixed effects for treatment, period, and treatment $\times$ period interaction, and a random effect for subjects. For small data sets, if this model can't be fit, I would drop the treatment $\times$ period interaction first.
Period should be included because it represents a restriction on the randomization. You cannot "randomize" periods --- they always happen in the same order. Treatment $\times$ period interaction might be indicative of some sort of carryover effect.
With tons of data one could work up terms that would allow estimation of various specific carryover effects. My notes on this are gone, though I know I've seen it handled in some texts.
The strategy of additionally modelling the correlation structure on the R-side seems reasonable to me. That allows one to claim that one is handling the possible dependence structure induced by repeated measures on the same subject, which I would also probably claim about the random effect for subject if the analysis devolved to that level... It is also nice if various analysis strategies provide broadly or very similar results.
For implementation, I'd use PROC MIXED in SAS and likely nlme or lme4 in R.
I'll punt on the compound symmetry question, since that seems more like a holdover from the days where MANOVA was the only "correct" analysis for repeated measures.
This is a variant on a thre | How to analyse repeated measure ANOVA with three or more conditions presented in randomised order?
Repeated measures is kind of an overloaded term. To some people it refers to a particular statistical analysis method; to others it refers to the structure of the design.
This is a variant on a three period, three treatment crossover design.
It is a variant because usually in a crossover design you randomize subjects to sequences. In this case the sequence is determined randomly for each subject. Since there are six possible sequences, it might be that some sequences are not observed, especially with 10 subjects. Maybe this is formally the same as randomizing subjects to sequences, but I haven't looked at that yet.
The considerations for crossover designs are:
Carryover effects: Also known as residual effects, where prior treatment may affect response to current treatment. The goal of the washout periods is to remove this from consideration. You could also have (in theory) second-order residual effects, where the treatment given in the first period potentially affects the response to treatment given in the third period.
Period effects: Response to treatment(s) may change as the study goes on for a given subject.
Autocorrelation: Serial correlation in errors is usually an issue with more closely measured data. In simple balanced designs, having a random effect for subject is going to imply equal correlaation of errors from each subject.
Subject effects: Subjects may differ in mean response from each other regardless of treatments. You could conceive of a situation where measurement error was serially correlated separate from a random subject effect.
Sequence effect: In cases where you randomize subjects to sequences, subjects are considered nested in sequence.
A minimal analysis for this would be the suggested randomized complete block design. That is, a fixed effect for treatment and a random effect for subject. With a skimpy sample size that might be all you can really do.
I would argue for a bit more structure to the analysis, if possible. Assuming no carryover effects on scientific grounds, it seems like a good idea to have at fixed effects for treatment, period, and treatment $\times$ period interaction, and a random effect for subjects. For small data sets, if this model can't be fit, I would drop the treatment $\times$ period interaction first.
Period should be included because it represents a restriction on the randomization. You cannot "randomize" periods --- they always happen in the same order. Treatment $\times$ period interaction might be indicative of some sort of carryover effect.
With tons of data one could work up terms that would allow estimation of various specific carryover effects. My notes on this are gone, though I know I've seen it handled in some texts.
The strategy of additionally modelling the correlation structure on the R-side seems reasonable to me. That allows one to claim that one is handling the possible dependence structure induced by repeated measures on the same subject, which I would also probably claim about the random effect for subject if the analysis devolved to that level... It is also nice if various analysis strategies provide broadly or very similar results.
For implementation, I'd use PROC MIXED in SAS and likely nlme or lme4 in R.
I'll punt on the compound symmetry question, since that seems more like a holdover from the days where MANOVA was the only "correct" analysis for repeated measures. | How to analyse repeated measure ANOVA with three or more conditions presented in randomised order?
Repeated measures is kind of an overloaded term. To some people it refers to a particular statistical analysis method; to others it refers to the structure of the design.
This is a variant on a thre |
37,791 | Is there a name for the high sensitivity of frequency of extreme data points to the mean of a normal distribution? | This is not an answer to the question, but may be of interest. The question gave three ratios of number of people in the mean=110 and mean=100 populations having IQ above a given threshold (ratio(IQ=150)≈10, ratio(120)≈3, ratio(110)≈2). The R code below plots the ratio as a function of IQ.
IQ = seq(0, 200, length.out=100)
c100 = pnorm(IQ, mean=100, sd=15)
c110 = pnorm(IQ, mean=110, sd=15)
ratio = (1 - c110) / (1 - c100)
plot(ratio ~ IQ); abline(h=c(0, 10, 20, 30, 40, 50, 60))
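For a quick numerical check of the quoted ratios, the same calculation can be done without R (Python, standard library only), since pnorm has a direct analogue via the complementary error function:

```python
import math

def frac_above(threshold, mean, sd=15.0):
    """P(IQ > threshold) for a normal population."""
    return 0.5 * math.erfc((threshold - mean) / (sd * math.sqrt(2.0)))

ratios = {iq: frac_above(iq, 110) / frac_above(iq, 100) for iq in (110, 120, 150)}
print(ratios)  # roughly 2, 2.8 and 9 - the ratio grows as the threshold moves out
```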
37,792 | How to model time-series temperature data at multiple sites as a function of data at one site? | You may want to examine the GAM package in R, as it can be adapted to do some (or all) of what you are looking for. The original paper (Hastie & Tibshirani, 1986) is available via OpenAccess if you're up for reading it.
Essentially, you model a single dependent variable as being an additive combination of 'smooth' predictors. One of the typical uses is to have time series and lags thereof as your predictors, smooth these inputs, then apply GAM.
This method has been used extensively to estimate daily mortality as a function of smoothed environmental time series, especially pollutants. It's not OpenAccess, but (Dominici et al., 2000) is a superb reference, and (Statistical Methods for Environmental Epidemiology with R) is an excellent book on how to use R to do this type of analysis.
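The additive-model idea behind GAM can be shown with a toy backfitting loop (Python, standard library only; the data, the running-mean smoother, and the iteration count are all invented for illustration — this is the spirit of the Hastie & Tibshirani algorithm, not the gam/mgcv implementation):

```python
import math
import random

random.seed(4)
n = 300
x1 = [random.uniform(0, 1) for _ in range(n)]
x2 = [random.uniform(0, 1) for _ in range(n)]
y = [math.sin(2 * math.pi * a) + 4.0 * (b - 0.5) ** 2 + random.gauss(0, 0.1)
     for a, b in zip(x1, x2)]

def smooth(x, r, k=15):
    """Running-mean smoother: predict r from x using neighbours in x-order."""
    order = sorted(range(len(x)), key=lambda i: x[i])
    fit = [0.0] * len(x)
    for rank, i in enumerate(order):
        window = order[max(0, rank - k):rank + k + 1]
        fit[i] = sum(r[j] for j in window) / len(window)
    return fit

alpha = sum(y) / n
f1, f2 = [0.0] * n, [0.0] * n
for _ in range(10):  # backfitting: cycle through smooths of partial residuals
    f1 = smooth(x1, [y[i] - alpha - f2[i] for i in range(n)])
    m1 = sum(f1) / n
    f1 = [v - m1 for v in f1]       # centre each fitted component
    f2 = smooth(x2, [y[i] - alpha - f1[i] for i in range(n)])
    m2 = sum(f2) / n
    f2 = [v - m2 for v in f2]

sse = sum((y[i] - alpha - f1[i] - f2[i]) ** 2 for i in range(n))
tss = sum((v - alpha) ** 2 for v in y)
print(round(1.0 - sse / tss, 2))  # fraction of variance explained by the additive fit
```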
37,793 | How to model time-series temperature data at multiple sites as a function of data at one site? | Whether or not you wish to forecast has nothing whatsoever to do with correct time series analysis. Time series methods can develop a robust model which can be used simply to characterize the relationship between a dependent series and a set of user-suggested inputs (a.k.a. user-specified predictor series) and empirically identified omitted variables, be they deterministic or stochastic. Users at their option can then extend the "signal" into the future, i.e. forecast, with uncertainties based upon the uncertainty in the coefficients and the uncertainty in the future values of the predictors. Now these two kinds of empirically identified "omitted series" can be classified as 1) deterministic and 2) stochastic. The first type are simply Pulses, Level Shifts, Seasonal Pulses and Local Time Trends, whereas the second type is represented by the ARIMA portion of your final model. When one omits one or more stochastic series from the list of possible predictors, the omission is characterized by the ARIMA component in your final model. Time series modelers refer to ARIMA models as a "Poor Man's Regression Model" because the past of the series is being used as a proxy for omitted stochastic input series.
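One of the deterministic "omitted series" types mentioned above — a level shift — can be located with a very simple search. A toy sketch (Python, standard library only; simulated data, and a brute-force two-mean fit rather than the full intervention-detection machinery):

```python
import random

random.seed(5)
# 100 observations with a level shift of +4 at t = 60
series = ([random.gauss(10.0, 1.0) for _ in range(60)]
          + [random.gauss(14.0, 1.0) for _ in range(40)])

def best_level_shift(z, margin=5):
    """Breakpoint minimising the SSE of a two-mean (step) fit to the series."""
    best_sse, best_b = float("inf"), None
    for b in range(margin, len(z) - margin):
        left, right = z[:b], z[b:]
        m1, m2 = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((v - m1) ** 2 for v in left)
               + sum((v - m2) ** 2 for v in right))
        if sse < best_sse:
            best_sse, best_b = sse, b
    return best_b

print(best_level_shift(series))  # should sit close to the true shift at t = 60
```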
37,794 | What is the right name for the variant of the Kolmogorov-Smirnov statistic that retains the sign of the difference? | It looks like a variant of Kuiper's test to me, although Kuiper's V = D+ + D− ≠ D'.
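To make the relationship concrete, here is a sketch (Python, standard library only) computing the one-sided statistics D+ and D−, the usual Kolmogorov-Smirnov D, Kuiper's V, and a sign-retaining D' constructed as the question seems to describe — the larger one-sided deviation with its sign attached (this construction is my reading of the question, not an established convention):

```python
import math
import random

random.seed(6)
sample = sorted(random.gauss(0.3, 1.0) for _ in range(200))  # shifted vs the null
n = len(sample)
F = lambda x: 0.5 * math.erfc(-x / math.sqrt(2.0))  # hypothesised N(0,1) CDF

d_plus = max((i + 1) / n - F(x) for i, x in enumerate(sample))
d_minus = max(F(x) - i / n for i, x in enumerate(sample))
d_ks = max(d_plus, d_minus)                            # Kolmogorov-Smirnov D
v_kuiper = d_plus + d_minus                            # Kuiper's V
d_signed = d_plus if d_plus >= d_minus else -d_minus   # sign-retaining variant D'
print(d_ks, v_kuiper, d_signed)
```

This makes the answer's point visible: V sums the two one-sided deviations, while D' keeps only the dominant one together with its direction, so in general V ≠ |D'|.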
37,795 | Self-organizing maps: fuzzy input? | There are a few papers which propose fuzzy SOM.
Petri Vuorimaa, "Fuzzy self-organizing map", Fuzzy Sets and Systems
Volume 66, Issue 2, 9 September 1994, Pages 223-231.
Kohonen's Self-Organizing Map is one of the best-known neural network models. In this paper, we introduce a fuzzy version of the model called: Fuzzy Self-Organizing Map. We replace the neurons of the original model by fuzzy rules, which are composed of fuzzy sets. The fuzzy sets define an area in the input space, where each fuzzy rule fires. The output of each rule is a singleton. The outputs are combined together by a weighted average, where the firing strengths of the fuzzy rules act as the weights. The weighted average gives a continuous valued output for the system. Thus the Fuzzy Self-Organizing Map performs a mapping from a n-dimensional input space to one-dimensional output space. The learning capability of the Fuzzy Self-Organizing Map enables it to model a continuous valued function to an arbitrary accuracy. The learning is done by first self-organizing the centers of the fuzzy sets according to Kohonen's Self-Organizing Map learning laws. After that, the fuzzy sets and the outputs of the fuzzy rules are initialized. Finally, in the last phase of the new learning method, the fuzzy sets are tuned by an algorithm similar to Kohonen's Learning Vector Quantization 2.1. Simulation results of a two-dimensional sinc function show good accuracy and fast convergence.
Janos Abonyi, Sandor Migaly and Ferenc Szeifer, "Fuzzy Self-Organizing Map based on Regularized Fuzzy $c$-means Clustering"
This paper presents a new fuzzy clustering algorithm for the clustering and visualization of high-dimensional data. The cluster centers are arranged on a grid defined on a small dimensional space that can be easily visualized. The smoothness of this mapping is achieved by adding a regularization term to the fuzzy $c$-means (FCM) functional. The measure of the smoothness is expressed as the sum of the second order partial derivatives of the cluster centers. Coding the values of the cluster centers with colors, regions with different colors evolve on the map and the hidden relation between the variables reveal. Comparison to the existing modifications of the fuzzy $c$-means algorithm and several application examples are given.
Petri Vuorimaa, "Fuzzy self-organizing map", Fuzzy Sets and Systems
Volume 66, Issue 2, 9 September 1994, Pages 223-231.
Kohonen's Self-Organizing Map | Self-organizing maps: fuzzy input?
There are a few papers which propose fuzzy SOM.
Petri Vuorimaa, "Fuzzy self-organizing map", Fuzzy Sets and Systems
Volume 66, Issue 2, 9 September 1994, Pages 223-231.
Kohonen's Self-Organizing Map is one of the best-known neural network models. In this paper, we introduce a fuzzy version of the model called: Fuzzy Self-Organizing Map. We replace the neurons of the original model by fuzzy rules, which are composed of fuzzy sets. The fuzzy sets define an area in the input space, where each fuzzy rule fires. The output of each rule is a singleton. The outputs are combined together by a weighted average, where the firing strengths of the fuzzy rules act as the weights. The weighted average gives a continuous valued output for the system. Thus the Fuzzy Self-Organizing Map performs a mapping from a n-dimensional input space to one-dimensional output space. The learning capability of the Fuzzy Self-Organizing Map enables it to model a continuous valued function to an arbitrary accuracy. The learning is done by first self-organizing the centers of the fuzzy sets according to Kohonen's Self-Organizing Map learning laws. After that, the fuzzy sets and the outputs of the fuzzy rules are initialized. Finally, in the last phase of the new learning method, the fuzzy sets are tuned by an algorithm similar to Kohonen's Learning Vector Quantization 2.1. Simulation results of a two-dimensional sinc function show good accuracy and fast convergence.
Janos Abonyi, Sandor Migaly and Ferenc Szeifer, "Fuzzy Self-Organizing Map based on Regularized Fuzzy $c$-means Clustering"
This paper presents a new fuzzy clustering algorithm for the clustering and visualization of high-dimensional data. The cluster centers are arranged on a grid defined on a small dimensional space that can be easily visualized. The smoothness of this mapping is achieved by adding a regularization term to the fuzzy $c$-means (FCM) functional. The measure of the smoothness is expressed as the sum of the second order partial derivatives of the cluster centers. Coding the values of the cluster centers with colors, regions with different colors evolve on the map and the hidden relations between the variables are revealed. Comparison to the existing modifications of the fuzzy $c$-means algorithm and several application examples are given.
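The rule-firing scheme described in Vuorimaa's abstract — Gaussian memberships with singleton outputs, combined by a firing-strength-weighted average — can be sketched in a few lines of Python. This is only an illustrative toy under assumed settings, not the paper's actual procedure: the rule centres are fixed on an even grid rather than self-organized, and the widths, learning rate, and target function are invented.

```python
import numpy as np

def fire(x, centers, width):
    # Gaussian firing strength of every fuzzy rule for a scalar input x
    return np.exp(-0.5 * ((x - centers) / width) ** 2)

def predict(x, centers, outputs, width):
    # weighted average of the singleton outputs, weights = firing strengths
    mu = fire(x, centers, width)
    return np.dot(mu, outputs) / mu.sum()

def train(xs, ys, n_rules=15, width=0.1, epochs=200, lr=0.05):
    centers = np.linspace(xs.min(), xs.max(), n_rules)  # fixed grid of rules
    outputs = np.zeros(n_rules)                         # singleton outputs
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            phi = fire(x, centers, width)
            phi /= phi.sum()
            outputs += lr * (y - np.dot(phi, outputs)) * phi  # LMS-style update
    return centers, outputs

xs = np.linspace(0.0, 1.0, 50)
ys = xs ** 2                                   # toy continuous target
centers, outputs = train(xs, ys)
preds = np.array([predict(x, centers, outputs, 0.1) for x in xs])
mse = float(np.mean((preds - ys) ** 2))        # small after training
```

With the outputs tuned by the LMS-style rule, the map gives a continuous-valued approximation of the target, which is the behaviour the abstract describes.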
37,796 | Reliability of mean of standard deviations | If you want to test whether the variances of several machines deviate from the other variances, combining them into an average will not help you: the differing variances will skew your average. To test whether the variances differ you can use Bartlett's test. It is sensitive to non-normality, but since you said that your data are normal this should not be a problem, though it would be a good idea to test that.
Now if you can assume that all the machines are similar, in the sense that they can have different means but a similar variance, the problem is very simple. If you assume that the machines are independent, treat the variances from each machine as a random sample. Then estimate the mean and standard deviation of this sample. For a large number of machines the normal approximation will kick in, so it will not matter whether you use standard deviations or variances. In both cases the sample mean will estimate the average of the statistic of your choice, and the standard deviation of the sample will estimate its spread. The 95% interval will then be $\mu\pm 1.96\sigma$.
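Both steps can be sketched in Python. The 25 machines and their measurements below are simulated purely for illustration; `scipy.stats.bartlett` is a standard implementation of Bartlett's test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated example: 25 machines, 20 normal measurements each,
# all sharing the same true standard deviation of 2.
machines = [rng.normal(loc=m, scale=2.0, size=20) for m in range(25)]

# Step 1: Bartlett's test of equal variances across machines.
stat, p = stats.bartlett(*machines)

# Step 2: if equal variance is plausible, treat the per-machine sample
# variances as a random sample and form the mu +/- 1.96 sigma band.
v = np.array([np.var(s, ddof=1) for s in machines])
mu, sigma = v.mean(), v.std(ddof=1)
band = (mu - 1.96 * sigma, mu + 1.96 * sigma)
```

With equal true variances, most of the per-machine sample variances fall inside the $\mu\pm 1.96\sigma$ band.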
37,797 | Explaining conditioning number in statistics to non-statisticians | Answered (and seemingly accepted) in the comments by Demetri Pananos:
Conditioning numbers were explained to me like a nightmare shower knob. You ever take a shower in which the smallest twist of the knob dramatically changes the temperature? That is what conditioning numbers are like.
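The analogy translates directly into code. The two-equation system below is invented for illustration: the matrix is nearly singular, so its condition number is large, and a tiny twist of the right-hand side swings the solution completely.

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])            # nearly singular -> ill-conditioned
b = np.array([2.0, 2.0001])

x = np.linalg.solve(A, b)                # approximately [1, 1]
b_twist = b + np.array([0.0, 1e-4])      # the "smallest twist of the knob"
x_twist = np.linalg.solve(A, b_twist)    # approximately [0, 2]

cond = np.linalg.cond(A)                 # roughly 4e4
rel_in = np.linalg.norm(b_twist - b) / np.linalg.norm(b)
rel_out = np.linalg.norm(x_twist - x) / np.linalg.norm(x)
# the output moved tens of thousands of times more (relatively) than the
# input did; the condition number bounds exactly this amplification
```

In regression terms, a near-collinear design matrix plays the role of `A`: a tiny perturbation of the data can move the fitted coefficients dramatically.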
37,798 | Interactions - using ratio of variables | Given the basic arithmetic:
$$\frac{x}{y} = x \times \frac{1}{y}$$
this does not really matter. So growth rate is $\text{size} \times (1/\text{age})$, and you already have this effect in your interaction. If it makes interpretation easier you can always work with reciprocal age rather than age as the independent variable.
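A quick numerical check of the point (data invented; plain least squares via NumPy): a regressor built as the ratio $\text{size}/\text{age}$ and one built as the interaction $\text{size} \times (1/\text{age})$ are the same column, so they produce identical fits.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
age = rng.uniform(1.0, 10.0, n)
size = rng.uniform(5.0, 50.0, n)
y = 3.0 + 0.5 * size / age + rng.normal(0.0, 0.1, n)   # toy outcome

X_ratio = np.column_stack([np.ones(n), size / age])          # ratio regressor
X_inter = np.column_stack([np.ones(n), size * (1.0 / age)])  # interaction form

beta_ratio, *_ = np.linalg.lstsq(X_ratio, y, rcond=None)
beta_inter, *_ = np.linalg.lstsq(X_inter, y, rcond=None)
# identical coefficients, because the second columns are identical
```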
37,799 | Interactions - using ratio of variables | As size = age * growth rate, you could also see size as an interaction between age and growth rate.
37,800 | How to test the increase of proportions | One option is to use a Cochran-Armitage trend test. There is an implementation in R. I suppose you are interested in event vs non-event, so you need to flip your matrix:
library(DescTools)
m <- matrix(c(71,248,419,796,285,288),ncol=3)
CochranArmitageTest(m)
Cochran-Armitage test for trend
data: m
Z = 8.5195, dim = 3, p-value < 2.2e-16
alternative hypothesis: two.sided
Since you have two groups, and you know the average score in the event group is higher, you can also test the difference of the scores using a Wilcoxon rank-sum test:
event_scores = rep(1:3,m[1,])
noevent_scores = rep(1:3,m[2,])
wilcox.test(event_scores,noevent_scores)
Wilcoxon rank sum test with continuity correction
data: event_scores and noevent_scores
W = 618058, p-value < 2.2e-16
alternative hypothesis: true location shift is not equal to 0
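For readers without R, both results can be reproduced in Python on the same table. SciPy has no built-in Cochran-Armitage test, so the Z statistic below is computed from the standard score-test formula; `scipy.stats.mannwhitneyu` is the same test that `wilcox.test` runs.

```python
import numpy as np
from scipy import stats

event = np.array([71, 419, 285])       # event counts at scores 1, 2, 3
nonevent = np.array([248, 796, 288])   # non-event counts

def cochran_armitage_z(r1, r2, scores=None):
    """Z statistic of the Cochran-Armitage test for trend in proportions."""
    r1 = np.asarray(r1, float)
    r2 = np.asarray(r2, float)
    n = r1 + r2                        # column totals
    if scores is None:
        scores = np.arange(len(n), dtype=float)
    N = n.sum()
    p = r1.sum() / N                   # overall event proportion
    t = np.dot(scores, r1)             # observed score total among events
    e = p * np.dot(scores, n)          # its expectation under "no trend"
    v = p * (1 - p) * (np.dot(scores**2, n) - np.dot(scores, n)**2 / N)
    return (t - e) / np.sqrt(v)

z = cochran_armitage_z(event, nonevent)    # about 8.52, matching the R output

event_scores = np.repeat([1, 2, 3], event)
noevent_scores = np.repeat([1, 2, 3], nonevent)
res = stats.mannwhitneyu(event_scores, noevent_scores, alternative="two-sided")
# res.statistic reproduces the W = 618058 reported by wilcox.test
```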